The motivation for this analysis was to examine the stability of each trial's stimulus representation. Theoretically, while there are only one or a few ways to do the task correctly, there are many ways to do it incorrectly. The stability of the multivariate representation of each trial might therefore give us insight into how each subject is doing the task, which in turn might relate to behavioral or clinical measures.
This analysis began by calculating inter-trial similarity: a multivariate pattern was extracted for each trial for each subject, from either a bilateral fusiform mask or a DFR delay load effect mask. Trials were then split into correct and incorrect at low and high load. From there, each trial was correlated with a subject-specific “correct trial” template: each incorrect trial was correlated with the mean of all the correct trials, while each correct trial was correlated with the mean of all correct trials excluding itself.
This resulted in each subject having 64 correlations, one per trial. We then averaged over trials for four separate conditions: high vs. low load, crossed with correct vs. incorrect. Thus, each subject ended up with four correlations at each TR, one per condition, which is what the present analysis uses. The low load incorrect condition warrants a grain of caution, because subjects had few trials in it.
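The leave-one-out template correlation described above can be sketched roughly as follows. This is a minimal illustration, not the actual pipeline; `trial_patterns` (one subject's trials × voxels matrix at a given TR) and `correct_idx` (a logical marking correct trials) are hypothetical names.

```r
# Correlate each trial's pattern with a "correct trial" template.
# Incorrect trials use the mean of all correct trials; correct trials
# use the mean of all other correct trials (leave-one-out).
template_correlations <- function(trial_patterns, correct_idx) {
  n_trials <- nrow(trial_patterns)
  sapply(seq_len(n_trials), function(t) {
    keep <- correct_idx
    if (correct_idx[t]) keep[t] <- FALSE        # exclude the trial itself
    template <- colMeans(trial_patterns[keep, , drop = FALSE])
    cor(trial_patterns[t, ], template)
  })
}
```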
library(dplyr)
library(psych)
library(ggplot2)
library(reshape2)
library(rmatio)
library(patchwork)
library(knitr)
library(kableExtra)
load('data/behav.RData')
load('data/DFR_split_groups_info.RData')
load('data/split_groups_info.RData')
# remove outlier from BPRS
p200_clinical$BPRS_TOT[27] <- NA
p200_data$BPRS_TOT[27] <- NA
source('helper_fxns/split_into_groups.R')
# standard error of the mean, ignoring NAs
se <- function(x) {
sd(x,na.rm=TRUE)/sqrt(length(x[!is.na(x)]))
}
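A quick illustration of the helper's NA handling (the definition is repeated so the snippet stands alone; the input values are made up):

```r
se <- function(x) {
  sd(x, na.rm = TRUE) / sqrt(length(x[!is.na(x)]))
}
x <- c(1, 2, 3, 4, NA)
# the NA is dropped from both the sd and the n, so n = 4 here
se(x) == sd(c(1, 2, 3, 4)) / sqrt(4)  # TRUE
```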
similarity_temp <- read.mat('data/intertrial_similarity_fusiform.mat')
# elements 14-17 of the .mat file hold the per-subject condition averages:
# high correct, high incorrect, low correct, low incorrect
for (i in seq.int(14,17)){
similarity_temp[[i]] <- data.frame(similarity_temp[[i]])
similarity_temp[[i]][similarity_temp[[i]]==0] <- NA
similarity_temp[[i]]$PTID <- constructs_fMRI$PTID
}
cond_avgs <- data.frame(matrix(nrow=4,ncol=14))
cond_avgs[1,] <- colMeans(similarity_temp[[14]][,1:14],na.rm=TRUE)
cond_avgs[2,] <- colMeans(similarity_temp[[15]][,1:14],na.rm=TRUE)
cond_avgs[3,] <- colMeans(similarity_temp[[16]][,1:14],na.rm=TRUE)
cond_avgs[4,] <- colMeans(similarity_temp[[17]][,1:14],na.rm=TRUE)
cond_avgs$group <- factor(names(similarity_temp)[14:17])
colnames(cond_avgs)[1:14] <- c(1:14)
se_avgs <- data.frame(matrix(nrow=4,ncol=14))
se_avgs[1,] <- sapply(similarity_temp[[14]][,1:14],se)
se_avgs[2,] <- sapply(similarity_temp[[15]][,1:14],se)
se_avgs[3,] <- sapply(similarity_temp[[16]][,1:14],se)
se_avgs[4,] <- sapply(similarity_temp[[17]][,1:14],se)
se_avgs$group <- factor(names(similarity_temp)[14:17])
colnames(se_avgs)[1:14] <- c(1:14)
cond_melt <- melt(cond_avgs,id.vars="group")
colnames(cond_melt) <- c("group", "TR", "similarity")
cond_melt$TR <- as.numeric(as.character(cond_melt$TR))
se_melt <- melt(se_avgs,id.vars="group")
colnames(se_melt) <- c("group", "TR", "se")
se_melt$TR <- as.numeric(as.character(se_melt$TR))
melt_avg_data <- merge(cond_melt,se_melt,by=c("group","TR"))
melt_avg_data$se_min <- melt_avg_data$similarity-melt_avg_data$se
melt_avg_data$se_max <- melt_avg_data$similarity+melt_avg_data$se
First, we just want to plot similarity over time, to see if there are meaningful task-related increases. There do appear to be: TR 5 is the encoding period, and TR 8 is the peak of the delay period. Interestingly, there isn’t a peak of similarity in the probe period.
We also can see a pattern in the split across trial type - we see most similarity in the high load trials. It’s interesting to me that even in the incorrect high load trials, there is higher inter-trial similarity than in the correct low-load trials. But could this be because of the visual differences in the trial types?
This measure does seem useful for distinguishing both load and accuracy, even when corrected for multiple comparisons. We can distinguish between correct and incorrect trials at high load at TRs 4-6 and 8, and at low load at TRs 1, 2, and 4-14.
We can distinguish between loads for correct trials at TRs 3-7, and for incorrect trials at TRs 1, 2, 4-10, and 12-14.
ggplot(data=melt_avg_data,aes(x=TR,y=similarity))+
geom_line(aes(color=group)) +
geom_ribbon(aes(ymin=se_min,ymax=se_max,fill=group),alpha=0.2)+
scale_x_continuous(breaks = c(1:14),labels=c(1:14))+
ggtitle("Intertrial similarity averaged over all subjects")+
theme_classic()
# Bonferroni-corrected threshold across the 14 TRs
corrected_p_val <- 0.05/14
low_load_acc_test <- data.frame(matrix(nrow=3,ncol=14))
colnames(low_load_acc_test) <- paste("TR_",c(1:14))
rownames(low_load_acc_test) <- c("t","p","corrected_sig")
high_load_acc_test <- data.frame(matrix(nrow=3,ncol=14))
colnames(high_load_acc_test) <- paste("TR_",c(1:14))
rownames(high_load_acc_test) <- c("t","p","corrected_sig")
for (time in seq.int(1,14)){
  low_test <- t.test(similarity_temp[[16]][,time],similarity_temp[[17]][,time],paired=TRUE)
  high_test <- t.test(similarity_temp[[14]][,time],similarity_temp[[15]][,time],paired=TRUE)
  low_load_acc_test[1,time] <- low_test$statistic
  low_load_acc_test[2,time] <- low_test$p.value
  # third row is a 0/1 flag: significant after Bonferroni correction
  low_load_acc_test[3,time] <- as.integer(low_test$p.value < corrected_p_val)
  high_load_acc_test[1,time] <- high_test$statistic
  high_load_acc_test[2,time] <- high_test$p.value
  high_load_acc_test[3,time] <- as.integer(high_test$p.value < corrected_p_val)
}
high_load_acc_test %>%
  kable(format = "html", escape = F) %>%
  kable_styling("striped", full_width = F) %>%
  add_header_above(c(" ", "t-test between correct and incorrect values at each time point, high load trials" = 14))
|   | TR_ 1 | TR_ 2 | TR_ 3 | TR_ 4 | TR_ 5 | TR_ 6 | TR_ 7 | TR_ 8 | TR_ 9 | TR_ 10 | TR_ 11 | TR_ 12 | TR_ 13 | TR_ 14 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| t | 2.3972061 | 1.8623274 | 2.4148361 | 5.5055990 | 5.4211117 | 6.082552 | 1.693629 | 3.2985253 | 2.728558 | 0.9517419 | 2.3789578 | 2.807815 | 2.8645213 | 2.2879091 |
| p | 0.0176113 | 0.0642929 | 0.0168091 | 0.0000001 | 0.0000002 | 0.000000 | 0.092178 | 0.0011851 | 0.007034 | 0.3425871 | 0.0184771 | 0.005575 | 0.0047064 | 0.0233819 |
| corrected_sig | 0.0000000 | 0.0000000 | 0.0000000 | 1.0000000 | 1.0000000 | 1.000000 | 0.000000 | 1.0000000 | 0.000000 | 0.0000000 | 0.0000000 | 0.000000 | 0.0000000 | 0.0000000 |
low_load_acc_test %>%
  kable(format = "html", escape = F) %>%
  kable_styling("striped", full_width = F) %>%
  add_header_above(c(" ", "t-test between correct and incorrect values at each time point, low load trials" = 14))
|   | TR_ 1 | TR_ 2 | TR_ 3 | TR_ 4 | TR_ 5 | TR_ 6 | TR_ 7 | TR_ 8 | TR_ 9 | TR_ 10 | TR_ 11 | TR_ 12 | TR_ 13 | TR_ 14 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| t | 5.1068252 | 3.9361140 | 2.4773292 | 4.9781700 | 8.494834 | 3.6088515 | 6.018626 | 7.974373 | 7.001935 | 5.1093126 | 4.0530884 | 6.025036 | 5.6609584 | 5.0296251 |
| p | 0.0000014 | 0.0001442 | 0.0147305 | 0.0000023 | 0.000000 | 0.0004612 | 0.000000 | 0.000000 | 0.000000 | 0.0000013 | 0.0000936 | 0.000000 | 0.0000001 | 0.0000019 |
| corrected_sig | 1.0000000 | 1.0000000 | 0.0000000 | 1.0000000 | 1.000000 | 1.0000000 | 1.000000 | 1.000000 | 1.000000 | 1.0000000 | 1.0000000 | 1.000000 | 1.0000000 | 1.0000000 |
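The same 0/1 flags can be produced with base R's `p.adjust`, which avoids hard-coding the divisor; a minimal check of the equivalence (toy p-values, not the values above):

```r
# Bonferroni in p.adjust multiplies each p-value by the number of tests
# (capped at 1), so comparing the adjusted values to 0.05 reproduces the
# manual p < 0.05/n rule used above.
p_vals <- c(0.0001, 0.004, 0.02, 0.3)
manual_flags   <- as.integer(p_vals < 0.05 / length(p_vals))
adjusted_flags <- as.integer(p.adjust(p_vals, method = "bonferroni") < 0.05)
identical(manual_flags, adjusted_flags)  # TRUE
```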
correct_load_test <- data.frame(matrix(nrow=3,ncol=14))
colnames(correct_load_test) <- paste("TR_",c(1:14))
rownames(correct_load_test) <- c("t","p","corrected_sig")
incorrect_load_test <- data.frame(matrix(nrow=3,ncol=14))
colnames(incorrect_load_test) <- paste("TR_",c(1:14))
rownames(incorrect_load_test) <- c("t","p","corrected_sig")
for (time in seq.int(1,14)){
  correct_test <- t.test(similarity_temp[[14]][,time],similarity_temp[[16]][,time],paired=TRUE)
  incorrect_test <- t.test(similarity_temp[[15]][,time],similarity_temp[[17]][,time],paired=TRUE)
  correct_load_test[1,time] <- correct_test$statistic
  correct_load_test[2,time] <- correct_test$p.value
  correct_load_test[3,time] <- as.integer(correct_test$p.value < corrected_p_val)
  incorrect_load_test[1,time] <- incorrect_test$statistic
  incorrect_load_test[2,time] <- incorrect_test$p.value
  incorrect_load_test[3,time] <- as.integer(incorrect_test$p.value < corrected_p_val)
}
correct_load_test %>%
  kable(format = "html", escape = F) %>%
  kable_styling("striped", full_width = F) %>%
  add_header_above(c(" ", "t-test between high and low loads at each time point, correct trials" = 14))
|   | TR_ 1 | TR_ 2 | TR_ 3 | TR_ 4 | TR_ 5 | TR_ 6 | TR_ 7 | TR_ 8 | TR_ 9 | TR_ 10 | TR_ 11 | TR_ 12 | TR_ 13 | TR_ 14 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| t | 1.7949581 | 0.9966598 | 3.3287628 | 11.21938 | 15.70624 | 15.78625 | 3.0118226 | -0.4521697 | 0.0242294 | -0.7152968 | 0.2766772 | 1.0200597 | 1.1948411 | 1.094814 |
| p | 0.0744474 | 0.3203545 | 0.0010708 | 0.00000 | 0.00000 | 0.00000 | 0.0029958 | 0.6517266 | 0.9806982 | 0.4754127 | 0.7823660 | 0.3091582 | 0.2338233 | 0.275156 |
| corrected_sig | 0.0000000 | 0.0000000 | 1.0000000 | 1.00000 | 1.00000 | 1.00000 | 1.0000000 | 0.0000000 | 0.0000000 | 0.0000000 | 0.0000000 | 0.0000000 | 0.0000000 | 0.000000 |
incorrect_load_test %>%
  kable(format = "html", escape = F) %>%
  kable_styling("striped", full_width = F) %>%
  add_header_above(c(" ", "t-test between high and low loads at each time point, incorrect trials" = 14))
|   | TR_ 1 | TR_ 2 | TR_ 3 | TR_ 4 | TR_ 5 | TR_ 6 | TR_ 7 | TR_ 8 | TR_ 9 | TR_ 10 | TR_ 11 | TR_ 12 | TR_ 13 | TR_ 14 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| t | 4.7608971 | 3.7080176 | 2.696950 | 5.5892848 | 11.71409 | 7.428207 | 4.9743285 | 5.6745823 | 5.7259048 | 3.8951079 | 2.5802842 | 5.174988 | 4.5173524 | 4.3885746 |
| p | 0.0000058 | 0.0003266 | 0.008079 | 0.0000002 | 0.00000 | 0.000000 | 0.0000024 | 0.0000001 | 0.0000001 | 0.0001674 | 0.0111653 | 0.000001 | 0.0000156 | 0.0000259 |
| corrected_sig | 1.0000000 | 1.0000000 | 0.000000 | 1.0000000 | 1.00000 | 1.000000 | 1.0000000 | 1.0000000 | 1.0000000 | 1.0000000 | 0.0000000 | 1.000000 | 1.0000000 | 1.0000000 |
split_similarity <- list()
split_sim_avgs <- list()
for (i in seq.int(14,17)){
split_similarity[[names(similarity_temp)[i]]] <- split_into_groups(similarity_temp[[i]],WM_groups)
colnames(split_similarity[[i-13]][["all"]])[1:14] <- c(1:14)
for (level in seq.int(1,3)){
temp_data <- data.frame(mean=colMeans(split_similarity[[i-13]][[level]][1:14],na.rm=TRUE),se = sapply(split_similarity[[i-13]][[level]][1:14],se),
se_min = colMeans(split_similarity[[i-13]][[level]][1:14],na.rm=TRUE) - sapply(split_similarity[[i-13]][[level]][1:14],se),
se_max = colMeans(split_similarity[[i-13]][[level]][1:14],na.rm=TRUE) + sapply(split_similarity[[i-13]][[level]][1:14],se))
split_sim_avgs[[names(split_similarity)[i-13]]][[names(split_similarity[[i-13]])[level]]] <- data.frame((temp_data))
split_sim_avgs[[i-13]][[level]]$group <- rep(names(split_similarity[[i-13]])[level],14)
split_sim_avgs[[i-13]][[level]]$TR <- seq.int(1,14)
}
split_sim_avgs[[i-13]][["all"]] <- rbind(split_sim_avgs[[i-13]][["high"]],split_sim_avgs[[i-13]][["med"]],split_sim_avgs[[i-13]][["low"]])
split_sim_avgs[[i-13]][["all"]]$group <- factor(split_sim_avgs[[i-13]][["all"]]$group, levels=c("high","med","low"))
}
The main thing that stands out here is that during the delay period (around TR 8), the high capacity group shows more similarity than the medium or low capacity groups, particularly for the high load trials. It is also interesting that for the high load incorrect trials we still see peaks consistent with the task structure, while for the low load incorrect trials there isn’t really any informative shape to the curves.
sim_plots <- list()
for (i in seq.int(1,4)){
sim_plots[[i]] <- ggplot(data = split_sim_avgs[[i]][["all"]])+
geom_line(aes(x=TR,y=mean,color=group))+
geom_ribbon(aes(x=TR,ymin=se_min,ymax=se_max,fill=group),alpha=0.2)+
scale_x_continuous(breaks = c(1:14),labels=c(1:14))+
ggtitle(names(split_sim_avgs)[i])+
ylab("Mean Similarity")+
theme_classic()
}
(sim_plots[[1]] + sim_plots[[2]]) / (sim_plots[[3]] + sim_plots[[4]])+
plot_layout(guides = "collect")+
plot_annotation(title="Inter-trial Similarity")
data_to_plot <- merge(constructs_fMRI,p200_data,by="PTID")
data_to_plot <- merge(data_to_plot,p200_clinical_zscores,by="PTID")
data_to_plot <- data_to_plot[,c(1,6,7,13,14,40,41)]
data_to_plot$ACC_LE <- data_to_plot$XDFR_MRI_ACC_L3 - data_to_plot$XDFR_MRI_ACC_L1
corr_to_behav_plots <- list()
for (i in seq.int(14,17)){
measure_by_time <- data.frame(matrix(nrow=4,ncol=14))
for (measure in seq.int(3,6)){
for (TR in seq.int(1,14)){
measure_by_time[measure-2,TR] <- cor(data_to_plot[,measure],similarity_temp[[i]][,TR],use = "pairwise.complete.obs")
}
}
measure_by_time <- data.frame(t(measure_by_time))
measure_by_time$TR <- seq.int(1,14)
colnames(measure_by_time)[1:4] <- colnames(data_to_plot)[3:6]
melted_measure_by_time <- melt(measure_by_time,id.vars="TR")
corr_to_behav_plots[[names(similarity_temp)[i]]] <- ggplot(data=melted_measure_by_time,aes(x=TR,y=value))+
geom_line(aes(color=variable))+
scale_x_continuous(breaks = c(1:14),labels=c(1:14))+
ggtitle(names(similarity_temp)[i])+
theme_classic()
}
Looking at the correlation over time for a number of behavioral and clinical measures, the most obvious pattern is that similarity, overall, correlates most strongly with accuracy, especially during encoding. Also note that the correlation with omnibus span peaks in the delay period, though it is weaker overall. Similarly, there is a relatively weak negative correlation with BPRS total in the encoding and delay periods. Interestingly, accuracy at low load is negatively correlated with similarity in the low load incorrect trials: the more strongly an individual’s incorrect trials correlate with the correct template, the more poorly they do overall. This finding should be interpreted with caution, though, because there are few incorrect low load trials overall.
(corr_to_behav_plots[[1]] + corr_to_behav_plots[[2]]) / (corr_to_behav_plots[[3]] + corr_to_behav_plots[[4]])+
plot_layout(guides="collect")+
plot_annotation(title = "Correlation between inter-trial similarity and behavioral measures")
scatter_plots_delay <- list()
scatter_plots_cue <- list()
scatter_plots_probe <- list()
for (i in seq.int(14,17)){
temp_plot_data <- merge(data_to_plot,similarity_temp[[i]],by="PTID")
scatter_plots_delay[[names(similarity_temp)[i]]][["omnibus"]] <- ggplot(data=temp_plot_data)+
geom_point(aes(x=omnibus_span_no_DFR_MRI,y=X8))+
stat_smooth(aes(x=omnibus_span_no_DFR_MRI,y=X8),method="lm")+
scale_x_continuous(breaks = c(1:14),labels=c(1:14))+
ggtitle(names(similarity_temp)[i])+
ylab("Inter-trial similarity")+
theme_classic()
scatter_plots_delay[[names(similarity_temp)[i]]][["BPRS"]] <- ggplot(data=temp_plot_data)+
geom_point(aes(x=BPRS_TOT.x,y=X8))+
stat_smooth(aes(x=BPRS_TOT.x,y=X8),method="lm")+
scale_x_continuous(breaks = c(1:14),labels=c(1:14))+
ggtitle(names(similarity_temp)[i])+
ylab("Inter-trial similarity")+
theme_classic()
scatter_plots_delay[[names(similarity_temp)[i]]][["L3_acc"]] <- ggplot(data=temp_plot_data)+
geom_point(aes(x=XDFR_MRI_ACC_L3,y=X8))+
stat_smooth(aes(x=XDFR_MRI_ACC_L3,y=X8),method="lm")+
scale_x_continuous(breaks = c(1:14),labels=c(1:14))+
ggtitle(names(similarity_temp)[i])+
ylab("Inter-trial similarity")+
theme_classic()
scatter_plots_cue[[names(similarity_temp)[i]]][["omnibus"]] <- ggplot(data=temp_plot_data)+
geom_point(aes(x=omnibus_span_no_DFR_MRI,y=X6))+
stat_smooth(aes(x=omnibus_span_no_DFR_MRI,y=X6),method="lm")+
scale_x_continuous(breaks = c(1:14),labels=c(1:14))+
ggtitle(names(similarity_temp)[i])+
ylab("Inter-trial similarity")+
theme_classic()
scatter_plots_cue[[names(similarity_temp)[i]]][["BPRS"]] <- ggplot(data=temp_plot_data)+
geom_point(aes(x=BPRS_TOT.x,y=X6))+
stat_smooth(aes(x=BPRS_TOT.x,y=X6),method="lm")+
scale_x_continuous(breaks = c(1:14),labels=c(1:14))+
ggtitle(names(similarity_temp)[i])+
ylab("Inter-trial similarity")+
theme_classic()
scatter_plots_cue[[names(similarity_temp)[i]]][["L3_acc"]] <- ggplot(data=temp_plot_data)+
geom_point(aes(x=XDFR_MRI_ACC_L3,y=X6))+
stat_smooth(aes(x=XDFR_MRI_ACC_L3,y=X6),method="lm")+
scale_x_continuous(breaks = c(1:14),labels=c(1:14))+
ggtitle(names(similarity_temp)[i])+
ylab("Inter-trial similarity")+
theme_classic()
scatter_plots_probe[[names(similarity_temp)[i]]][["omnibus"]] <- ggplot(data=temp_plot_data)+
geom_point(aes(x=omnibus_span_no_DFR_MRI,y=X11))+
stat_smooth(aes(x=omnibus_span_no_DFR_MRI,y=X11),method="lm")+
scale_x_continuous(breaks = c(1:14),labels=c(1:14))+
ggtitle(names(similarity_temp)[i])+
ylab("Inter-trial similarity")+
theme_classic()
scatter_plots_probe[[names(similarity_temp)[i]]][["BPRS"]] <- ggplot(data=temp_plot_data)+
geom_point(aes(x=BPRS_TOT.x,y=X11))+
stat_smooth(aes(x=BPRS_TOT.x,y=X11),method="lm")+
scale_x_continuous(breaks = c(1:14),labels=c(1:14))+
ggtitle(names(similarity_temp)[i])+
ylab("Inter-trial similarity")+
theme_classic()
scatter_plots_probe[[names(similarity_temp)[i]]][["L3_acc"]] <- ggplot(data=temp_plot_data)+
geom_point(aes(x=XDFR_MRI_ACC_L3,y=X11))+
stat_smooth(aes(x=XDFR_MRI_ACC_L3,y=X11),method="lm")+
scale_x_continuous(breaks = c(1:14),labels=c(1:14))+
ggtitle(names(similarity_temp)[i])+
ylab("Inter-trial similarity")+
theme_classic()
}
The main point of these analyses was to make sure that we weren’t missing any non-linear relationships that a linear correlation would obscure. We don’t see any. The only significant linear correlations are those with accuracy.
(scatter_plots_cue[[1]][["omnibus"]] + scatter_plots_cue[[2]][["omnibus"]]) /
(scatter_plots_cue[[3]][["omnibus"]] + scatter_plots_cue[[4]][["omnibus"]])+
plot_layout(guides="collect")+
plot_annotation(title = "Omnibus span vs inter-trial similarity - encoding")
## Warning: Removed 57 rows containing non-finite values (stat_smooth).
## Warning: Removed 57 rows containing missing values (geom_point).
cor.test(similarity_temp[["high_correct_avg"]]$X5,data_to_plot$omnibus_span_no_DFR_MRI)
##
## Pearson's product-moment correlation
##
## data: similarity_temp[["high_correct_avg"]]$X5 and data_to_plot$omnibus_span_no_DFR_MRI
## t = 1.0422, df = 168, p-value = 0.2988
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## -0.07122842 0.22791004
## sample estimates:
## cor
## 0.08014506
cor.test(similarity_temp[["high_incorrect_avg"]]$X5,data_to_plot$omnibus_span_no_DFR_MRI)
##
## Pearson's product-moment correlation
##
## data: similarity_temp[["high_incorrect_avg"]]$X5 and data_to_plot$omnibus_span_no_DFR_MRI
## t = 1.3057, df = 168, p-value = 0.1935
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## -0.05105816 0.24701425
## sample estimates:
## cor
## 0.1002263
(scatter_plots_cue[[1]][["BPRS"]] + scatter_plots_cue[[2]][["BPRS"]]) /
(scatter_plots_cue[[3]][["BPRS"]] + scatter_plots_cue[[4]][["BPRS"]])+
plot_layout(guides="collect")+
plot_annotation(title = "BPRS vs inter-trial similarity - encoding")
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 58 rows containing non-finite values (stat_smooth).
## Warning: Removed 58 rows containing missing values (geom_point).
cor.test(similarity_temp[["high_correct_avg"]]$X5,data_to_plot$BPRS_TOT.x)
##
## Pearson's product-moment correlation
##
## data: similarity_temp[["high_correct_avg"]]$X5 and data_to_plot$BPRS_TOT.x
## t = -1.2081, df = 167, p-value = 0.2287
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## -0.24065781 0.05870559
## sample estimates:
## cor
## -0.09307933
cor.test(similarity_temp[["high_incorrect_avg"]]$X5,data_to_plot$BPRS_TOT.x)
##
## Pearson's product-moment correlation
##
## data: similarity_temp[["high_incorrect_avg"]]$X5 and data_to_plot$BPRS_TOT.x
## t = -0.56576, df = 167, p-value = 0.5723
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## -0.1934207 0.1079349
## sample estimates:
## cor
## -0.04373779
(scatter_plots_cue[[1]][["L3_acc"]] + scatter_plots_cue[[2]][["L3_acc"]]) /
(scatter_plots_cue[[3]][["L3_acc"]] + scatter_plots_cue[[4]][["L3_acc"]])+
plot_layout(guides="collect")+
plot_annotation(title = "L3_acc vs inter-trial similarity - encoding")
## Warning: Removed 57 rows containing non-finite values (stat_smooth).
## Warning: Removed 57 rows containing missing values (geom_point).
cor.test(similarity_temp[["high_correct_avg"]]$X5,data_to_plot$XDFR_MRI_ACC_L3)
##
## Pearson's product-moment correlation
##
## data: similarity_temp[["high_correct_avg"]]$X5 and data_to_plot$XDFR_MRI_ACC_L3
## t = 3.8309, df = 168, p-value = 0.0001801
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## 0.1388500 0.4161991
## sample estimates:
## cor
## 0.2834407
cor.test(similarity_temp[["high_incorrect_avg"]]$X5,data_to_plot$XDFR_MRI_ACC_L3)
##
## Pearson's product-moment correlation
##
## data: similarity_temp[["high_incorrect_avg"]]$X5 and data_to_plot$XDFR_MRI_ACC_L3
## t = 3.3638, df = 168, p-value = 0.0009523
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## 0.1046443 0.3870809
## sample estimates:
## cor
## 0.251202
cor.test(similarity_temp[["low_correct_avg"]]$X5,data_to_plot$XDFR_MRI_ACC_L3)
##
## Pearson's product-moment correlation
##
## data: similarity_temp[["low_correct_avg"]]$X5 and data_to_plot$XDFR_MRI_ACC_L3
## t = 2.8135, df = 168, p-value = 0.005486
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## 0.06364264 0.35141913
## sample estimates:
## cor
## 0.2121249
cor.test(similarity_temp[["low_incorrect_avg"]]$X5,data_to_plot$XDFR_MRI_ACC_L3)
##
## Pearson's product-moment correlation
##
## data: similarity_temp[["low_incorrect_avg"]]$X5 and data_to_plot$XDFR_MRI_ACC_L3
## t = 0.23759, df = 111, p-value = 0.8126
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## -0.1628625 0.2064158
## sample estimates:
## cor
## 0.02254565
No non-linear relationships here, either. The linear relationships with accuracy are all statistically significant, though the relationship with omnibus span only trends toward significance.
(scatter_plots_delay[[1]][["omnibus"]] + scatter_plots_delay[[2]][["omnibus"]]) /
(scatter_plots_delay[[3]][["omnibus"]] + scatter_plots_delay[[4]][["omnibus"]])+
plot_layout(guides="collect")+
plot_annotation(title = "Omnibus span vs inter-trial similarity - delay")
## Warning: Removed 57 rows containing non-finite values (stat_smooth).
## Warning: Removed 57 rows containing missing values (geom_point).
cor.test(similarity_temp[["high_correct_avg"]]$X8,data_to_plot$omnibus_span_no_DFR_MRI)
##
## Pearson's product-moment correlation
##
## data: similarity_temp[["high_correct_avg"]]$X8 and data_to_plot$omnibus_span_no_DFR_MRI
## t = 1.8897, df = 168, p-value = 0.06053
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## -0.006387842 0.288514575
## sample estimates:
## cor
## 0.1442651
(scatter_plots_delay[[1]][["BPRS"]] + scatter_plots_delay[[2]][["BPRS"]]) /
(scatter_plots_delay[[3]][["BPRS"]] + scatter_plots_delay[[4]][["BPRS"]])+
plot_layout(guides="collect")+
plot_annotation(title = "BPRS vs inter-trial similarity - delay")
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 58 rows containing non-finite values (stat_smooth).
## Warning: Removed 58 rows containing missing values (geom_point).
cor.test(similarity_temp[["high_correct_avg"]]$X8,data_to_plot$BPRS_TOT.x)
##
## Pearson's product-moment correlation
##
## data: similarity_temp[["high_correct_avg"]]$X8 and data_to_plot$BPRS_TOT.x
## t = -1.5374, df = 167, p-value = 0.1261
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## -0.26438049 0.03342069
## sample estimates:
## cor
## -0.1181354
cor.test(similarity_temp[["high_incorrect_avg"]]$X8,data_to_plot$BPRS_TOT.x)
##
## Pearson's product-moment correlation
##
## data: similarity_temp[["high_incorrect_avg"]]$X8 and data_to_plot$BPRS_TOT.x
## t = 0.10242, df = 167, p-value = 0.9185
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## -0.1432065 0.1586951
## sample estimates:
## cor
## 0.007924895
(scatter_plots_delay[[1]][["L3_acc"]] + scatter_plots_delay[[2]][["L3_acc"]]) /
(scatter_plots_delay[[3]][["L3_acc"]] + scatter_plots_delay[[4]][["L3_acc"]])+
plot_layout(guides="collect")+
plot_annotation(title = "L3_acc vs inter-trial similarity - delay")
## Warning: Removed 57 rows containing non-finite values (stat_smooth).
## Warning: Removed 57 rows containing missing values (geom_point).
cor.test(similarity_temp[["high_correct_avg"]]$X8,data_to_plot$XDFR_MRI_ACC_L3)
##
## Pearson's product-moment correlation
##
## data: similarity_temp[["high_correct_avg"]]$X8 and data_to_plot$XDFR_MRI_ACC_L3
## t = 2.5743, df = 168, p-value = 0.01091
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## 0.0456270 0.3354812
## sample estimates:
## cor
## 0.1948034
cor.test(similarity_temp[["high_incorrect_avg"]]$X8,data_to_plot$XDFR_MRI_ACC_L3)
##
## Pearson's product-moment correlation
##
## data: similarity_temp[["high_incorrect_avg"]]$X8 and data_to_plot$XDFR_MRI_ACC_L3
## t = 2.5362, df = 168, p-value = 0.01212
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## 0.04275452 0.33292450
## sample estimates:
## cor
## 0.192033
cor.test(similarity_temp[["low_correct_avg"]]$X8,data_to_plot$XDFR_MRI_ACC_L3)
##
## Pearson's product-moment correlation
##
## data: similarity_temp[["low_correct_avg"]]$X8 and data_to_plot$XDFR_MRI_ACC_L3
## t = 2.2986, df = 168, p-value = 0.02276
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## 0.02475484 0.31680625
## sample estimates:
## cor
## 0.1746185
cor.test(similarity_temp[["low_incorrect_avg"]]$X8,data_to_plot$XDFR_MRI_ACC_L3)
##
## Pearson's product-moment correlation
##
## data: similarity_temp[["low_incorrect_avg"]]$X8 and data_to_plot$XDFR_MRI_ACC_L3
## t = -0.91105, df = 111, p-value = 0.3642
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## -0.2666375 0.1001729
## sample estimates:
## cor
## -0.08615111
Here, we again see correlations with accuracy for correct trials (irrespective of load), and a correlation with BPRS for the low load correct trials.
(scatter_plots_probe[[1]][["omnibus"]] + scatter_plots_probe[[2]][["omnibus"]]) /
(scatter_plots_probe[[3]][["omnibus"]] + scatter_plots_probe[[4]][["omnibus"]])+
plot_layout(guides="collect")+
plot_annotation(title = "Omnibus span vs inter-trial similarity - probe")
## Warning: Removed 57 rows containing non-finite values (stat_smooth).
## Warning: Removed 57 rows containing missing values (geom_point).
cor.test(similarity_temp[["high_correct_avg"]]$X11,data_to_plot$omnibus_span_no_DFR_MRI)
##
## Pearson's product-moment correlation
##
## data: similarity_temp[["high_correct_avg"]]$X11 and data_to_plot$omnibus_span_no_DFR_MRI
## t = 1.2632, df = 168, p-value = 0.2083
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## -0.05431093 0.24394899
## sample estimates:
## cor
## 0.09699623
(scatter_plots_probe[[1]][["BPRS"]] + scatter_plots_probe[[2]][["BPRS"]]) /
(scatter_plots_probe[[3]][["BPRS"]] + scatter_plots_probe[[4]][["BPRS"]])+
plot_layout(guides="collect")+
plot_annotation(title = "BPRS vs inter-trial similarity - probe")
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 58 rows containing non-finite values (stat_smooth).
## Warning: Removed 58 rows containing missing values (geom_point).
cor.test(similarity_temp[["low_correct_avg"]]$X11,data_to_plot$BPRS_TOT.x)
##
## Pearson's product-moment correlation
##
## data: similarity_temp[["low_correct_avg"]]$X11 and data_to_plot$BPRS_TOT.x
## t = 2.2522, df = 167, p-value = 0.02561
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## 0.02128244 0.31449955
## sample estimates:
## cor
## 0.1716909
(scatter_plots_probe[[1]][["L3_acc"]] + scatter_plots_probe[[2]][["L3_acc"]]) /
(scatter_plots_probe[[3]][["L3_acc"]] + scatter_plots_probe[[4]][["L3_acc"]])+
plot_layout(guides="collect")+
plot_annotation(title = "L3_acc vs inter-trial similarity - probe")
## Warning: Removed 57 rows containing non-finite values (stat_smooth).
## Warning: Removed 57 rows containing missing values (geom_point).
cor.test(similarity_temp[["high_correct_avg"]]$X11,data_to_plot$XDFR_MRI_ACC_L3)
##
## Pearson's product-moment correlation
##
## data: similarity_temp[["high_correct_avg"]]$X11 and data_to_plot$XDFR_MRI_ACC_L3
## t = 3.6306, df = 168, p-value = 0.0003751
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## 0.1242588 0.4038472
## sample estimates:
## cor
## 0.2697284
cor.test(similarity_temp[["high_incorrect_avg"]]$X11,data_to_plot$XDFR_MRI_ACC_L3)
##
## Pearson's product-moment correlation
##
## data: similarity_temp[["high_incorrect_avg"]]$X11 and data_to_plot$XDFR_MRI_ACC_L3
## t = 1.5842, df = 168, p-value = 0.115
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## -0.02973581 0.26696082
## sample estimates:
## cor
## 0.1213214
cor.test(similarity_temp[["low_correct_avg"]]$X11,data_to_plot$XDFR_MRI_ACC_L3)
##
## Pearson's product-moment correlation
##
## data: similarity_temp[["low_correct_avg"]]$X11 and data_to_plot$XDFR_MRI_ACC_L3
## t = 2.6079, df = 168, p-value = 0.009931
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## 0.0481640 0.3377357
## sample estimates:
## cor
## 0.1972483
cor.test(similarity_temp[["low_incorrect_avg"]]$X11,data_to_plot$XDFR_MRI_ACC_L3)
##
## Pearson's product-moment correlation
##
## data: similarity_temp[["low_incorrect_avg"]]$X11 and data_to_plot$XDFR_MRI_ACC_L3
## t = 0.43986, df = 111, p-value = 0.6609
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## -0.1441271 0.2247116
## sample estimates:
## cor
## 0.04171328
The analysis above only looked within a single TR. Now, let’s look at the fidelity of representations across the TRs corresponding to encoding and delay, and how that relates to the different behavioral measures. We’ll make three comparisons: first, encoding vs. delay within each individual trial; second, each trial’s encoding vs. the average delay of correct trials; and third, each trial’s delay vs. the average encoding of correct trials.
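As a minimal sketch of the within-trial comparison (the array name, dimensions, and TR indices below are illustrative stand-ins, not the actual structures stored in similarity_temp), the per-trial encoding-to-delay similarity could be computed as:

```r
# Hypothetical data for one subject: voxels x TRs x trials
set.seed(1)
trial_patterns <- array(rnorm(100 * 12 * 64), dim = c(100, 12, 64))

# Correlate the encoding pattern (TR 5) with the delay pattern (TR 8)
# within each trial, yielding one similarity value per trial
encoding_to_delay <- apply(trial_patterns, 3, function(trial) {
  cor(trial[, 5], trial[, 8])
})

# Fisher z-transform (equivalent to psych::fisherz) before averaging
# over trials or running t-tests
encoding_to_delay_z <- atanh(encoding_to_delay)
```

Averaging the z-transformed values within each load/accuracy condition would then give one value per subject per condition.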
encoding_to_delay_plots <- list()
for (i in c(6,10,12)){
colnames(similarity_temp[[i]]) <- unlist(similarity_temp[[1]][[1]])
similarity_temp[[i]][similarity_temp[[i]]==0] <- NA
temp_plot_data <- cbind.data.frame(data_to_plot,similarity_temp[[i]])
encoding_to_delay_plots[[names(similarity_temp)[i]]][["omnibus"]][["low_load_incorrect"]] <- ggplot(data = temp_plot_data,aes(x=omnibus_span_no_DFR_MRI,y=`low load incorrect`))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("Low Load - Incorrect trials")+
theme_classic()
encoding_to_delay_plots[[names(similarity_temp)[i]]][["omnibus"]][["high_load_incorrect"]] <- ggplot(data = temp_plot_data,aes(x=omnibus_span_no_DFR_MRI,y=`high load incorrect`))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("High Load - Incorrect trials")+
theme_classic()
encoding_to_delay_plots[[names(similarity_temp)[i]]][["omnibus"]][["low_load_correct"]] <- ggplot(data = temp_plot_data,aes(x=omnibus_span_no_DFR_MRI,y=`low load correct`))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("Low Load - Correct trials")+
theme_classic()
encoding_to_delay_plots[[names(similarity_temp)[i]]][["omnibus"]][["high_load_correct"]] <- ggplot(data = temp_plot_data,aes(x=omnibus_span_no_DFR_MRI,y=`high load correct`))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("High Load - Correct trials")+
theme_classic()
encoding_to_delay_plots[[names(similarity_temp)[i]]][["L3_Acc"]][["low_load_incorrect"]] <- ggplot(data = temp_plot_data,aes(x=XDFR_MRI_ACC_L3,y=`low load incorrect`))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("Low Load - Incorrect trials")+
theme_classic()
encoding_to_delay_plots[[names(similarity_temp)[i]]][["L3_Acc"]][["high_load_incorrect"]] <- ggplot(data = temp_plot_data,aes(x=XDFR_MRI_ACC_L3,y=`high load incorrect`))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("High Load - Incorrect trials")+
theme_classic()
encoding_to_delay_plots[[names(similarity_temp)[i]]][["L3_Acc"]][["low_load_correct"]] <- ggplot(data = temp_plot_data,aes(x=XDFR_MRI_ACC_L3,y=`low load correct`))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("Low Load - Correct trials")+
theme_classic()
encoding_to_delay_plots[[names(similarity_temp)[i]]][["L3_Acc"]][["high_load_correct"]] <- ggplot(data = temp_plot_data,aes(x=XDFR_MRI_ACC_L3,y=`high load correct`))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("High Load - Correct trials")+
theme_classic()
encoding_to_delay_plots[[names(similarity_temp)[i]]][["BPRS"]][["low_load_incorrect"]] <- ggplot(data = temp_plot_data,aes(x=BPRS_TOT.x,y=`low load incorrect`))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("Low Load - Incorrect trials")+
theme_classic()
encoding_to_delay_plots[[names(similarity_temp)[i]]][["BPRS"]][["high_load_incorrect"]] <- ggplot(data = temp_plot_data,aes(x=BPRS_TOT.x,y=`high load incorrect`))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("High Load - Incorrect trials")+
theme_classic()
encoding_to_delay_plots[[names(similarity_temp)[i]]][["BPRS"]][["low_load_correct"]] <- ggplot(data = temp_plot_data,aes(x=BPRS_TOT.x,y=`low load correct`))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("Low Load - Correct trials")+
theme_classic()
encoding_to_delay_plots[[names(similarity_temp)[i]]][["BPRS"]][["high_load_correct"]] <- ggplot(data = temp_plot_data,aes(x=BPRS_TOT.x,y=`high load correct`))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("High Load - Correct trials")+
theme_classic()
}
temp_plot_data <- cbind.data.frame(data_to_plot,similarity_temp[["correct_encoding_to_correct_delay"]])
colnames(temp_plot_data)[9] <- "correct_encoding_delay"
encoding_to_delay_plots[["correct_encoding_to_correct_delay"]][["omnibus"]] <- ggplot(data =
temp_plot_data,aes(x=omnibus_span_no_DFR_MRI,y=correct_encoding_delay))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("Template encoding/delay vs Omnibus Span")+
theme_classic()
encoding_to_delay_plots[["correct_encoding_to_correct_delay"]][["BPRS"]] <- ggplot(data =
temp_plot_data,aes(x=BPRS_TOT.x,y=correct_encoding_delay))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("Template encoding/delay vs BPRS")+
theme_classic()
encoding_to_delay_plots[["correct_encoding_to_correct_delay"]][["L3_Acc"]] <- ggplot(data =
temp_plot_data,aes(x=XDFR_MRI_ACC_L3,y=correct_encoding_delay))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("Template encoding/delay vs L3 accuracy")+
theme_classic()
In these graphs, a correlation rendered in black is significant at p < 0.05; otherwise it is grey.
correlations = list()
for (i in c(6,10,12)){
colnames(similarity_temp[[i]]) <- unlist(similarity_temp[[1]][[1]])
temp_list <- list(r = data.frame(matrix(nrow=4,ncol=6)), p = data.frame(matrix(nrow=4,ncol=6)))
for (behav in seq.int(2,7)){
for (sim in seq.int(1,4)){
temp_corr <- cor.test(similarity_temp[[i]][,sim],data_to_plot[,behav])
temp_list[["r"]][sim,behav-1] <- temp_corr$estimate
temp_list[["p"]][sim,behav-1] <- temp_corr$p.value
}
}
colnames(temp_list[["r"]]) <- colnames(data_to_plot)[2:7]
rownames(temp_list[["r"]]) <- colnames(similarity_temp[[i]])
colnames(temp_list[["p"]]) <- colnames(data_to_plot)[2:7]
rownames(temp_list[["p"]]) <- colnames(similarity_temp[[i]])
correlations[[names(similarity_temp)[i]]] <- temp_list
}
temp <- data.frame(r=matrix(nrow=6,ncol=1),p=matrix(nrow=6,ncol=1))
rownames(temp) <- colnames(data_to_plot)[2:7]
for (behav in seq.int(2,7)){
temp_corr <- cor.test(similarity_temp[["correct_encoding_to_correct_delay"]],data_to_plot[,behav])
temp$r[behav-1] <- temp_corr$estimate
temp$p[behav-1] <- temp_corr$p.value
}
correlations[["correct_encoding_to_correct_delay"]] <- temp
Here, we take the correlation between the multivariate representation at encoding (TR 5) and delay (TR 8) on each trial.
This is definitely the messiest set of results. There appear to be slight negative correlations between omnibus span and similarity on correct trials (though these are not statistically significant), and a clear negative correlation between inter-trial similarity at high load and accuracy. Also interesting (though not shown here): high load accuracy correlates only with similarity at high load, and low load accuracy only with similarity at low load. Both patterns suggest that the less similar the encoding and delay periods are, the higher an individual’s capacity and the better their performance, particularly on high load trials.
correlations[["encoding_to_delay_avg"]][["r"]] %>%
mutate(
condition = row.names(.),
omnibus_span_no_DFR_MRI = cell_spec(omnibus_span_no_DFR_MRI, "html",
color =ifelse(correlations[["encoding_to_delay_avg"]][["p"]]$omnibus_span_no_DFR_MRI < 0.05, "black", "grey")),
XDFR_MRI_ACC_L3 = cell_spec(XDFR_MRI_ACC_L3, "html",
color =ifelse(correlations[["encoding_to_delay_avg"]][["p"]]$XDFR_MRI_ACC_L3 < 0.05, "black", "grey")),
BPRS_TOT.x = cell_spec(BPRS_TOT.x, "html",
color =ifelse(correlations[["encoding_to_delay_avg"]][["p"]]$BPRS_TOT.x < 0.05, "black", "grey"))
) %>%
select(condition,omnibus_span_no_DFR_MRI,XDFR_MRI_ACC_L3,BPRS_TOT.x) %>%
kable(format = "html", escape = F) %>%
kable_styling("striped", full_width = F) %>%
add_header_above((c(" ", "Individual Encoding to Individual Delay" = 3)))
| condition | omnibus_span_no_DFR_MRI | XDFR_MRI_ACC_L3 | BPRS_TOT.x |
|---|---|---|---|
| low load incorrect | 0.0242637091035295 | 0.0232742994042766 | -0.123810779509404 |
| high load incorrect | 0.0196214074574858 | -0.166333549374184 | 0.0269010595004991 |
| low load correct | -0.0138011463724424 | -0.0884899423553457 | -0.0874632578173017 |
| high load correct | -0.081886675592525 | -0.203089348295182 | 0.0389720246240788 |
An important check is whether the measures we’re calculating have any behavioral relevance. To test this, we run paired t-tests asking whether each measure can distinguish load across correct trials and accuracy across high load trials.
This measure can’t distinguish either load or accuracy.
encoding_to_delay_avg <- fisherz(similarity_temp[["encoding_to_delay_avg"]])
t.test(encoding_to_delay_avg[,4],encoding_to_delay_avg[,2],paired=TRUE)
##
## Paired t-test
##
## data: encoding_to_delay_avg[, 4] and encoding_to_delay_avg[, 2]
## t = -0.91418, df = 169, p-value = 0.3619
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.02871061 0.01053602
## sample estimates:
## mean of the differences
## -0.009087295
t.test(encoding_to_delay_avg[,4],encoding_to_delay_avg[,3],paired=TRUE)
##
## Paired t-test
##
## data: encoding_to_delay_avg[, 4] and encoding_to_delay_avg[, 3]
## t = -0.68679, df = 169, p-value = 0.4932
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.03262912 0.01578560
## sample estimates:
## mean of the differences
## -0.008421763
(encoding_to_delay_plots[["encoding_to_delay_avg"]][["omnibus"]][["high_load_correct"]] + encoding_to_delay_plots[["encoding_to_delay_avg"]][["omnibus"]][["high_load_incorrect"]]) /
(encoding_to_delay_plots[["encoding_to_delay_avg"]][["omnibus"]][["low_load_correct"]] +
encoding_to_delay_plots[["encoding_to_delay_avg"]][["omnibus"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs omnibus")
## Warning: Removed 57 rows containing non-finite values (stat_smooth).
## Warning: Removed 57 rows containing missing values (geom_point).
(encoding_to_delay_plots[["encoding_to_delay_avg"]][["L3_Acc"]][["high_load_correct"]] + encoding_to_delay_plots[["encoding_to_delay_avg"]][["L3_Acc"]][["high_load_incorrect"]]) /
(encoding_to_delay_plots[["encoding_to_delay_avg"]][["L3_Acc"]][["low_load_correct"]] +
encoding_to_delay_plots[["encoding_to_delay_avg"]][["L3_Acc"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs High Load Accuracy")
## Warning: Removed 57 rows containing non-finite values (stat_smooth).
## Warning: Removed 57 rows containing missing values (geom_point).
(encoding_to_delay_plots[["encoding_to_delay_avg"]][["BPRS"]][["high_load_correct"]] + encoding_to_delay_plots[["encoding_to_delay_avg"]][["BPRS"]][["high_load_incorrect"]]) /
(encoding_to_delay_plots[["encoding_to_delay_avg"]][["BPRS"]][["low_load_correct"]] +
encoding_to_delay_plots[["encoding_to_delay_avg"]][["BPRS"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs BPRS Total")
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 58 rows containing non-finite values (stat_smooth).
## Warning: Removed 58 rows containing missing values (geom_point).
Here, we’re taking the delay period from a “template” (i.e., the average of all correct trials) and correlating it with the encoding period of each trial. We see strong negative correlations with accuracy for all trial types except low load incorrect trials, and positive correlations with BPRS for the high load trials (regardless of accuracy). This suggests that the more strongly encoding on a given trial correlates with the correct delay template, the more psychiatric symptoms an individual has.
The relationship between omnibus span and similarity in the correct trials (irrespective of load) is trending towards significance.
correlations[["encoding_to_correct_delay_avg"]][["r"]] %>%
mutate(
condition = row.names(.),
omnibus_span_no_DFR_MRI = cell_spec(omnibus_span_no_DFR_MRI, "html",
color =ifelse(correlations[["encoding_to_correct_delay_avg"]][["p"]]$omnibus_span_no_DFR_MRI < 0.05, "black", "grey")),
XDFR_MRI_ACC_L3 = cell_spec(XDFR_MRI_ACC_L3, "html",
color =ifelse(correlations[["encoding_to_correct_delay_avg"]][["p"]]$XDFR_MRI_ACC_L3 < 0.05, "black", "grey")),
BPRS_TOT.x = cell_spec(BPRS_TOT.x, "html",
color =ifelse(correlations[["encoding_to_correct_delay_avg"]][["p"]]$BPRS_TOT.x < 0.05, "black", "grey"))
) %>%
select(condition,omnibus_span_no_DFR_MRI,XDFR_MRI_ACC_L3,BPRS_TOT.x) %>%
kable(format = "html", escape = F) %>%
kable_styling("striped", full_width = F) %>%
add_header_above((c(" ", "Individual Encoding to Template Delay" = 3)))
| condition | omnibus_span_no_DFR_MRI | XDFR_MRI_ACC_L3 | BPRS_TOT.x |
|---|---|---|---|
| low load incorrect | -0.143253325182348 | -0.219430855781915 | 0.135882061706973 |
| high load incorrect | -0.0715837210736226 | -0.194870744630673 | 0.185599361045007 |
| low load correct | -0.137040126258098 | -0.337025669157246 | 0.187409373936538 |
| high load correct | -0.14181434005323 | -0.342023439290818 | 0.192604510469676 |
This measure distinguishes both load and accuracy.
encoding_to_correct_delay_avg <- fisherz(similarity_temp[["encoding_to_correct_delay_avg"]])
t.test(encoding_to_correct_delay_avg[,4],encoding_to_correct_delay_avg[,2],paired=TRUE)
##
## Paired t-test
##
## data: encoding_to_correct_delay_avg[, 4] and encoding_to_correct_delay_avg[, 2]
## t = -3.9205, df = 169, p-value = 0.0001282
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.04637851 -0.01531431
## sample estimates:
## mean of the differences
## -0.03084641
t.test(encoding_to_correct_delay_avg[,4],encoding_to_correct_delay_avg[,3],paired=TRUE)
##
## Paired t-test
##
## data: encoding_to_correct_delay_avg[, 4] and encoding_to_correct_delay_avg[, 3]
## t = -12.307, df = 169, p-value < 2.2e-16
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.10590627 -0.07662617
## sample estimates:
## mean of the differences
## -0.09126622
(encoding_to_delay_plots[["encoding_to_correct_delay_avg"]][["omnibus"]][["high_load_correct"]] + encoding_to_delay_plots[["encoding_to_correct_delay_avg"]][["omnibus"]][["high_load_incorrect"]]) /
(encoding_to_delay_plots[["encoding_to_correct_delay_avg"]][["omnibus"]][["low_load_correct"]] +
encoding_to_delay_plots[["encoding_to_correct_delay_avg"]][["omnibus"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs omnibus")
## Warning: Removed 57 rows containing non-finite values (stat_smooth).
## Warning: Removed 57 rows containing missing values (geom_point).
(encoding_to_delay_plots[["encoding_to_correct_delay_avg"]][["L3_Acc"]][["high_load_correct"]] + encoding_to_delay_plots[["encoding_to_correct_delay_avg"]][["L3_Acc"]][["high_load_incorrect"]]) /
(encoding_to_delay_plots[["encoding_to_correct_delay_avg"]][["L3_Acc"]][["low_load_correct"]] +
encoding_to_delay_plots[["encoding_to_correct_delay_avg"]][["L3_Acc"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs High Load Accuracy")
## Warning: Removed 57 rows containing non-finite values (stat_smooth).
## Warning: Removed 57 rows containing missing values (geom_point).
(encoding_to_delay_plots[["encoding_to_correct_delay_avg"]][["BPRS"]][["high_load_correct"]] + encoding_to_delay_plots[["encoding_to_correct_delay_avg"]][["BPRS"]][["high_load_incorrect"]]) /
(encoding_to_delay_plots[["encoding_to_correct_delay_avg"]][["BPRS"]][["low_load_correct"]] +
encoding_to_delay_plots[["encoding_to_correct_delay_avg"]][["BPRS"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs BPRS Total")
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 58 rows containing non-finite values (stat_smooth).
## Warning: Removed 58 rows containing missing values (geom_point).
We’ve seen that there is a relationship between BPRS and the correlation between each individual encoding trial and the template delay, but we want to make sure it holds when we control for accuracy and span. Both relationships (for correct and for incorrect trials) still do!
data_for_reg <- data.frame(acc = data_to_plot$XDFR_MRI_ACC_L3,span = data_to_plot$omnibus_span_no_DFR_MRI, BPRS = data_to_plot$BPRS_TOT.x, ITC_correct = similarity_temp[["encoding_to_correct_delay_avg"]][,4], ITC_incorrect = similarity_temp[["encoding_to_correct_delay_avg"]][,2])
BPRS_correct.lm <- lm(BPRS ~ acc + span + ITC_correct,data = data_for_reg)
summary(BPRS_correct.lm)
##
## Call:
## lm(formula = BPRS ~ acc + span + ITC_correct, data = data_for_reg)
##
## Residuals:
## Min 1Q Median 3Q Max
## -10.6649 -4.9112 -0.7108 3.6352 22.7267
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 37.565 4.104 9.153 <2e-16 ***
## acc -1.465 5.791 -0.253 0.8006
## span -2.546 1.019 -2.498 0.0135 *
## ITC_correct 7.981 3.936 2.028 0.0442 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 6.846 on 165 degrees of freedom
## (1 observation deleted due to missingness)
## Multiple R-squared: 0.07646, Adjusted R-squared: 0.05967
## F-statistic: 4.554 on 3 and 165 DF, p-value: 0.004294
BPRS_incorrect.lm <- lm(BPRS ~ acc + span + ITC_incorrect,data = data_for_reg)
summary(BPRS_incorrect.lm)
##
## Call:
## lm(formula = BPRS ~ acc + span + ITC_incorrect, data = data_for_reg)
##
## Residuals:
## Min 1Q Median 3Q Max
## -11.8097 -5.1056 -0.6128 3.5993 21.3160
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 38.414 4.084 9.405 <2e-16 ***
## acc -2.905 5.592 -0.520 0.6041
## span -2.602 1.017 -2.560 0.0114 *
## ITC_incorrect 8.026 3.708 2.164 0.0319 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 6.835 on 165 degrees of freedom
## (1 observation deleted due to missingness)
## Multiple R-squared: 0.07958, Adjusted R-squared: 0.06285
## F-statistic: 4.755 on 3 and 165 DF, p-value: 0.003306
Interestingly, we see a slightly different pattern when we look at how the delay period of any given trial correlates with the correct encoding template. Omnibus span is positively correlated with similarity on low load incorrect trials (though use caution, as there are few trials in this condition) and negatively correlated with it on high load correct trials. Accuracy is negatively correlated with similarity on high load trials and on correct low load trials, and there is a significant positive correlation between BPRS and similarity on high load correct trials.
correlations[["correct_encoding_to_delay_avg"]][["r"]] %>%
mutate(
condition = row.names(.),
omnibus_span_no_DFR_MRI = cell_spec(omnibus_span_no_DFR_MRI, "html",
color =ifelse(correlations[["correct_encoding_to_delay_avg"]][["p"]]$omnibus_span_no_DFR_MRI < 0.05, "black", "grey")),
XDFR_MRI_ACC_L3 = cell_spec(XDFR_MRI_ACC_L3, "html",
color =ifelse(correlations[["correct_encoding_to_delay_avg"]][["p"]]$XDFR_MRI_ACC_L3 < 0.05, "black", "grey")),
BPRS_TOT.x = cell_spec(BPRS_TOT.x, "html",
color =ifelse(correlations[["correct_encoding_to_delay_avg"]][["p"]]$BPRS_TOT.x < 0.05, "black", "grey"))
) %>%
select(condition,omnibus_span_no_DFR_MRI,XDFR_MRI_ACC_L3,BPRS_TOT.x) %>%
kable(format = "html", escape = F) %>%
kable_styling("striped", full_width = F) %>%
add_header_above((c(" ", "Template Encoding to Delay" = 3)))
| condition | omnibus_span_no_DFR_MRI | XDFR_MRI_ACC_L3 | BPRS_TOT.x |
|---|---|---|---|
| low load incorrect | 0.142568841398427 | 0.0488176325213739 | -0.0421605373911986 |
| high load incorrect | -0.0449086927965139 | -0.236684645554992 | 0.0969031173550461 |
| low load correct | -0.137395065457336 | -0.261330225459152 | 0.135331930734418 |
| high load correct | -0.183668322006186 | -0.293690787439439 | 0.26670467177783 |
This measure distinguishes both load and accuracy.
correct_encoding_to_delay_avg <- fisherz(similarity_temp[["correct_encoding_to_delay_avg"]])
t.test(correct_encoding_to_delay_avg[,4],correct_encoding_to_delay_avg[,2],paired=TRUE)
##
## Paired t-test
##
## data: correct_encoding_to_delay_avg[, 4] and correct_encoding_to_delay_avg[, 2]
## t = -3.1517, df = 169, p-value = 0.001921
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.041176278 -0.009459724
## sample estimates:
## mean of the differences
## -0.025318
t.test(correct_encoding_to_delay_avg[,4],correct_encoding_to_delay_avg[,3],paired=TRUE)
##
## Paired t-test
##
## data: correct_encoding_to_delay_avg[, 4] and correct_encoding_to_delay_avg[, 3]
## t = 3.6653, df = 169, p-value = 0.0003305
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## 0.01071411 0.03572743
## sample estimates:
## mean of the differences
## 0.02322077
(encoding_to_delay_plots[["correct_encoding_to_delay_avg"]][["omnibus"]][["high_load_correct"]] + encoding_to_delay_plots[["correct_encoding_to_delay_avg"]][["omnibus"]][["high_load_incorrect"]]) /
(encoding_to_delay_plots[["correct_encoding_to_delay_avg"]][["omnibus"]][["low_load_correct"]] +
encoding_to_delay_plots[["correct_encoding_to_delay_avg"]][["omnibus"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs omnibus")
## Warning: Removed 57 rows containing non-finite values (stat_smooth).
## Warning: Removed 57 rows containing missing values (geom_point).
(encoding_to_delay_plots[["correct_encoding_to_delay_avg"]][["L3_Acc"]][["high_load_correct"]] + encoding_to_delay_plots[["correct_encoding_to_delay_avg"]][["L3_Acc"]][["high_load_incorrect"]]) /
(encoding_to_delay_plots[["correct_encoding_to_delay_avg"]][["L3_Acc"]][["low_load_correct"]] +
encoding_to_delay_plots[["correct_encoding_to_delay_avg"]][["L3_Acc"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs High Load Accuracy")
## Warning: Removed 57 rows containing non-finite values (stat_smooth).
## Warning: Removed 57 rows containing missing values (geom_point).
(encoding_to_delay_plots[["correct_encoding_to_delay_avg"]][["BPRS"]][["high_load_correct"]] + encoding_to_delay_plots[["correct_encoding_to_delay_avg"]][["BPRS"]][["high_load_incorrect"]]) /
(encoding_to_delay_plots[["correct_encoding_to_delay_avg"]][["BPRS"]][["low_load_correct"]] +
encoding_to_delay_plots[["correct_encoding_to_delay_avg"]][["BPRS"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs BPRS Total")
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 58 rows containing non-finite values (stat_smooth).
## Warning: Removed 58 rows containing missing values (geom_point).
This one also does!
data_for_reg <- data.frame(acc = data_to_plot$XDFR_MRI_ACC_L3,span = data_to_plot$omnibus_span_no_DFR_MRI, BPRS = data_to_plot$BPRS_TOT.x, ITC_correct = similarity_temp[["correct_encoding_to_delay_avg"]][,4], ITC_incorrect = similarity_temp[["correct_encoding_to_delay_avg"]][,2])
BPRS_correct.lm <- lm(BPRS ~ acc + span + ITC_correct,data = data_for_reg)
summary(BPRS_correct.lm)
##
## Call:
## lm(formula = BPRS ~ acc + span + ITC_correct, data = data_for_reg)
##
## Residuals:
## Min 1Q Median 3Q Max
## -11.2373 -5.0868 -0.6637 3.0361 23.2875
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 37.6086 4.0400 9.309 < 2e-16 ***
## acc -0.6422 5.6362 -0.114 0.90943
## span -2.2965 1.0107 -2.272 0.02436 *
## ITC_correct 15.8865 5.3436 2.973 0.00339 **
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 6.752 on 165 degrees of freedom
## (1 observation deleted due to missingness)
## Multiple R-squared: 0.1016, Adjusted R-squared: 0.08524
## F-statistic: 6.218 on 3 and 165 DF, p-value: 0.0005007
Here, we just want to look at the correlation between the template encoding and the template delay. In this analysis, we see a significant negative correlation with omnibus span, a negative correlation with accuracy, and a positive correlation with BPRS.
These relationships suggest that as span and accuracy increase, there is less similarity between encoding and delay, but we see the opposite relationship with psychiatric symptoms.
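A minimal sketch of this template-to-template measure (again using illustrative stand-in data and TR indices, not the actual pipeline objects):

```r
# Hypothetical data: voxels x TRs x trials, plus per-trial accuracy
set.seed(2)
trial_patterns <- array(rnorm(100 * 12 * 64), dim = c(100, 12, 64))
correct <- sample(c(TRUE, FALSE), 64, replace = TRUE, prob = c(0.8, 0.2))

# Average the correct trials into one template at encoding (TR 5)
# and one at delay (TR 8)
template_encoding <- rowMeans(trial_patterns[, 5, correct])
template_delay <- rowMeans(trial_patterns[, 8, correct])

# A single similarity value per subject, which can then be correlated
# with span, accuracy, or BPRS across subjects
template_similarity <- cor(template_encoding, template_delay)
```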
correlations[["correct_encoding_to_correct_delay"]][c(2,3,5),] %>%
mutate(
condition = row.names(.),
r = cell_spec(r, "html",
color =ifelse(p < 0.05, "black", "grey"))
) %>%
select(condition,r) %>%
kable(format = "html", escape = F) %>%
kable_styling("striped", full_width = F) %>%
add_header_above(c(" ", "Template Encoding to Template Delay" = 1))
| condition | r |
|---|---|
| omnibus_span_no_DFR_MRI | -0.162994555337693 |
| XDFR_MRI_ACC_L3 | -0.276758447584082 |
| BPRS_TOT.x | 0.20039919415834 |
encoding_to_delay_plots[["correct_encoding_to_correct_delay"]][["omnibus"]]
encoding_to_delay_plots[["correct_encoding_to_correct_delay"]][["L3_Acc"]]
encoding_to_delay_plots[["correct_encoding_to_correct_delay"]][["BPRS"]]
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
This one holds as well.
data_for_reg <- data.frame(acc = data_to_plot$XDFR_MRI_ACC_L3,span = data_to_plot$omnibus_span_no_DFR_MRI, BPRS = data_to_plot$BPRS_TOT.x, ITC = similarity_temp[["correct_encoding_to_correct_delay"]])
BPRS.lm <- lm(BPRS ~ ITC + span + acc,data = data_for_reg)
summary(BPRS.lm)
##
## Call:
## lm(formula = BPRS ~ ITC + span + acc, data = data_for_reg)
##
## Residuals:
## Min 1Q Median 3Q Max
## -10.639 -4.930 -1.025 3.631 23.395
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 38.241 4.089 9.352 <2e-16 ***
## ITC 3.973 1.933 2.055 0.0415 *
## span -2.435 1.023 -2.381 0.0184 *
## acc -2.110 5.695 -0.371 0.7114
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 6.844 on 165 degrees of freedom
## (1 observation deleted due to missingness)
## Multiple R-squared: 0.07706, Adjusted R-squared: 0.06028
## F-statistic: 4.592 on 3 and 165 DF, p-value: 0.004084
Another theoretically interesting comparison is encoding to probe. Here, we’ll do the same thing as above, taking TR 5 for encoding and TR 11 for the probe.
encoding_to_probe_plots <- list()
for (i in c(7,11,13)){
colnames(similarity_temp[[i]]) <- unlist(similarity_temp[[1]][[1]])
similarity_temp[[i]][similarity_temp[[i]]==0] <- NA
temp_plot_data <- cbind.data.frame(data_to_plot,similarity_temp[[i]])
encoding_to_probe_plots[[names(similarity_temp)[i]]][["omnibus"]][["low_load_incorrect"]] <- ggplot(data = temp_plot_data,aes(x=omnibus_span_no_DFR_MRI,y=`low load incorrect`))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("Low Load - Incorrect trials")+
theme_classic()
encoding_to_probe_plots[[names(similarity_temp)[i]]][["omnibus"]][["high_load_incorrect"]] <- ggplot(data = temp_plot_data,aes(x=omnibus_span_no_DFR_MRI,y=`high load incorrect`))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("High Load - Incorrect trials")+
theme_classic()
encoding_to_probe_plots[[names(similarity_temp)[i]]][["omnibus"]][["low_load_correct"]] <- ggplot(data = temp_plot_data,aes(x=omnibus_span_no_DFR_MRI,y=`low load correct`))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("Low Load - Correct trials")+
theme_classic()
encoding_to_probe_plots[[names(similarity_temp)[i]]][["omnibus"]][["high_load_correct"]] <- ggplot(data = temp_plot_data,aes(x=omnibus_span_no_DFR_MRI,y=`high load correct`))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("High Load - Correct trials")+
theme_classic()
encoding_to_probe_plots[[names(similarity_temp)[i]]][["L3_Acc"]][["low_load_incorrect"]] <- ggplot(data = temp_plot_data,aes(x=XDFR_MRI_ACC_L3,y=`low load incorrect`))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("Low Load - Incorrect trials")+
theme_classic()
encoding_to_probe_plots[[names(similarity_temp)[i]]][["L3_Acc"]][["high_load_incorrect"]] <- ggplot(data = temp_plot_data,aes(x=XDFR_MRI_ACC_L3,y=`high load incorrect`))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("High Load - Incorrect trials")+
theme_classic()
encoding_to_probe_plots[[names(similarity_temp)[i]]][["L3_Acc"]][["low_load_correct"]] <- ggplot(data = temp_plot_data,aes(x=XDFR_MRI_ACC_L3,y=`low load correct`))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("Low Load - Correct trials")+
theme_classic()
encoding_to_probe_plots[[names(similarity_temp)[i]]][["L3_Acc"]][["high_load_correct"]] <- ggplot(data = temp_plot_data,aes(x=XDFR_MRI_ACC_L3,y=`high load correct`))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("High Load - Correct trials")+
theme_classic()
encoding_to_probe_plots[[names(similarity_temp)[i]]][["BPRS"]][["low_load_incorrect"]] <- ggplot(data = temp_plot_data,aes(x=BPRS_TOT.x,y=`low load incorrect`))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("Low Load - Incorrect trials")+
theme_classic()
encoding_to_probe_plots[[names(similarity_temp)[i]]][["BPRS"]][["high_load_incorrect"]] <- ggplot(data = temp_plot_data,aes(x=BPRS_TOT.x,y=`high load incorrect`))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("High Load - Incorrect trials")+
theme_classic()
encoding_to_probe_plots[[names(similarity_temp)[i]]][["BPRS"]][["low_load_correct"]] <- ggplot(data = temp_plot_data,aes(x=BPRS_TOT.x,y=`low load correct`))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("Low Load - Correct trials")+
theme_classic()
encoding_to_probe_plots[[names(similarity_temp)[i]]][["BPRS"]][["high_load_correct"]] <- ggplot(data = temp_plot_data,aes(x=BPRS_TOT.x,y=`high load correct`))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("High Load - Correct trials")+
theme_classic()
}
temp_plot_data <- cbind.data.frame(data_to_plot,similarity_temp[["correct_encoding_to_correct_probe"]])
colnames(temp_plot_data)[9] <- "correct_encoding_probe"
encoding_to_probe_plots[["correct_encoding_to_correct_probe"]][["omnibus"]] <- ggplot(data =
temp_plot_data,aes(x=omnibus_span_no_DFR_MRI,y=correct_encoding_probe))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("Template encoding/probe vs Omnibus Span")+
theme_classic()
encoding_to_probe_plots[["correct_encoding_to_correct_probe"]][["BPRS"]] <- ggplot(data =
temp_plot_data,aes(x=BPRS_TOT.x,y=correct_encoding_probe))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("Template encoding/probe vs BPRS")+
theme_classic()
encoding_to_probe_plots[["correct_encoding_to_correct_probe"]][["L3_Acc"]] <- ggplot(data =
temp_plot_data,aes(x=XDFR_MRI_ACC_L3,y=correct_encoding_probe))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("Template encoding/probe vs L3 accuracy")+
theme_classic()
In the tables below, a correlation printed in black is significant at p < 0.05; otherwise it is printed in grey.
correlations = list()
for (i in c(7,11,13)){
colnames(similarity_temp[[i]]) <- unlist(similarity_temp[[1]][[1]])
temp_list <- list(r = data.frame(matrix(nrow=4,ncol=6)), p = data.frame(matrix(nrow=4,ncol=6)))
for (behav in seq.int(2,7)){
for (sim in seq.int(1,4)){
temp_corr <- cor.test(similarity_temp[[i]][,sim],data_to_plot[,behav])
temp_list[["r"]][sim,behav-1] <- temp_corr$estimate
temp_list[["p"]][sim,behav-1] <- temp_corr$p.value
}
}
colnames(temp_list[["r"]]) <- colnames(data_to_plot)[2:7]
rownames(temp_list[["r"]]) <- colnames(similarity_temp[[i]])
colnames(temp_list[["p"]]) <- colnames(data_to_plot)[2:7]
rownames(temp_list[["p"]]) <- colnames(similarity_temp[[i]])
correlations[[names(similarity_temp)[i]]] <- temp_list
}
temp <- data.frame(r=matrix(nrow=6,ncol=1),p=matrix(nrow=6,ncol=1))
rownames(temp) <- colnames(data_to_plot)[2:7]
for (behav in seq.int(2,7)){
temp_corr <- cor.test(similarity_temp[["correct_encoding_to_correct_probe"]],data_to_plot[,behav])
temp$r[behav-1] <- temp_corr$estimate
temp$p[behav-1] <- temp_corr$p.value
}
correlations[["correct_encoding_to_correct_probe"]] <- temp
Here, we take the correlation between the multivariate representation at encoding (TR 5) and probe (TR 11) on each trial.
The only significant correlation we see here is between accuracy and similarity on high load correct trials.
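The per-trial computation can be sketched as follows. The matrices here are toy stand-ins for the real voxel patterns, and the variable names are illustrative rather than taken from the actual pipeline:

```r
set.seed(1)
# Toy stand-ins for real data: 64 trials x 200 voxels at the encoding
# and probe TRs (the real patterns come from the fusiform / delay masks)
encoding_patterns <- matrix(rnorm(64 * 200), nrow = 64)
probe_patterns    <- matrix(rnorm(64 * 200), nrow = 64)

# Correlate each trial's encoding pattern with its own probe pattern,
# yielding one similarity value per trial
encoding_to_probe <- sapply(seq_len(nrow(encoding_patterns)), function(t)
  cor(encoding_patterns[t, ], probe_patterns[t, ]))
```

These per-trial values are then averaged within the four load-by-accuracy conditions before being correlated with the behavioral measures.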
correlations[["encoding_to_probe_avg"]][["r"]] %>%
mutate(
condition = row.names(.),
omnibus_span_no_DFR_MRI = cell_spec(omnibus_span_no_DFR_MRI, "html",
color =ifelse(correlations[["encoding_to_probe_avg"]][["p"]]$omnibus_span_no_DFR_MRI < 0.05, "black", "grey")),
XDFR_MRI_ACC_L3 = cell_spec(XDFR_MRI_ACC_L3, "html",
color =ifelse(correlations[["encoding_to_probe_avg"]][["p"]]$XDFR_MRI_ACC_L3 < 0.05, "black", "grey")),
BPRS_TOT.x = cell_spec(BPRS_TOT.x, "html",
color =ifelse(correlations[["encoding_to_probe_avg"]][["p"]]$BPRS_TOT.x < 0.05, "black", "grey"))
) %>%
select(condition,omnibus_span_no_DFR_MRI,XDFR_MRI_ACC_L3,BPRS_TOT.x) %>%
kable(format = "html", escape = F) %>%
kable_styling("striped", full_width = F) %>%
add_header_above((c(" ", "Individual Encoding to Individual Probe" = 3)))
| condition | omnibus_span_no_DFR_MRI | XDFR_MRI_ACC_L3 | BPRS_TOT.x |
|---|---|---|---|
| low load incorrect | -0.107958873031196 | 0.135077934527628 | -0.00271329088133755 |
| high load incorrect | -0.063901040567794 | 0.114113255823614 | -0.107076328962866 |
| low load correct | -0.148308000737086 | 0.100572367678891 | -0.0155467140238554 |
| high load correct | 0.0398996225962218 | 0.189082168572699 | -0.0404386802904891 |
This measure distinguishes load but not accuracy, though the accuracy comparison falls just short of significance (p = 0.051).
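Before the paired t-tests below, the similarity values are Fisher z-transformed so that correlations can be averaged and compared on an approximately normal scale. `psych::fisherz` is simply the arctanh transform:

```r
library(psych)

r <- c(-0.5, 0.1, 0.9)
z <- fisherz(r)          # 0.5 * log((1 + r) / (1 - r))
all.equal(z, atanh(r))   # TRUE: identical to the inverse hyperbolic tangent
```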
encoding_to_probe_avg <- fisherz(similarity_temp[["encoding_to_probe_avg"]])
t.test(encoding_to_probe_avg[,4],encoding_to_probe_avg[,2],paired=TRUE)
##
## Paired t-test
##
## data: encoding_to_probe_avg[, 4] and encoding_to_probe_avg[, 2]
## t = 1.9688, df = 169, p-value = 0.05061
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -4.648305e-05 3.445211e-02
## sample estimates:
## mean of the differences
## 0.01720281
t.test(encoding_to_probe_avg[,4],encoding_to_probe_avg[,3],paired=TRUE)
##
## Paired t-test
##
## data: encoding_to_probe_avg[, 4] and encoding_to_probe_avg[, 3]
## t = 2.7727, df = 169, p-value = 0.006183
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## 0.004980572 0.029603550
## sample estimates:
## mean of the differences
## 0.01729206
(encoding_to_probe_plots[["encoding_to_probe_avg"]][["omnibus"]][["high_load_correct"]] + encoding_to_probe_plots[["encoding_to_probe_avg"]][["omnibus"]][["high_load_incorrect"]]) /
(encoding_to_probe_plots[["encoding_to_probe_avg"]][["omnibus"]][["low_load_correct"]] +
encoding_to_probe_plots[["encoding_to_probe_avg"]][["omnibus"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs omnibus")
## Warning: Removed 57 rows containing non-finite values (stat_smooth).
## Warning: Removed 57 rows containing missing values (geom_point).
(encoding_to_probe_plots[["encoding_to_probe_avg"]][["L3_Acc"]][["high_load_correct"]] + encoding_to_probe_plots[["encoding_to_probe_avg"]][["L3_Acc"]][["high_load_incorrect"]]) /
(encoding_to_probe_plots[["encoding_to_probe_avg"]][["L3_Acc"]][["low_load_correct"]] +
encoding_to_probe_plots[["encoding_to_probe_avg"]][["L3_Acc"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs High Load Accuracy")
## Warning: Removed 57 rows containing non-finite values (stat_smooth).
## Warning: Removed 57 rows containing missing values (geom_point).
(encoding_to_probe_plots[["encoding_to_probe_avg"]][["BPRS"]][["high_load_correct"]] + encoding_to_probe_plots[["encoding_to_probe_avg"]][["BPRS"]][["high_load_incorrect"]]) /
(encoding_to_probe_plots[["encoding_to_probe_avg"]][["BPRS"]][["low_load_correct"]] +
encoding_to_probe_plots[["encoding_to_probe_avg"]][["BPRS"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs BPRS Total")
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 58 rows containing non-finite values (stat_smooth).
## Warning: Removed 58 rows containing missing values (geom_point).
This shows more relationships with accuracy, for all trial types except low load incorrect.
correlations[["encoding_to_correct_probe_avg"]][["r"]] %>%
mutate(
condition = row.names(.),
omnibus_span_no_DFR_MRI = cell_spec(omnibus_span_no_DFR_MRI, "html",
color =ifelse(correlations[["encoding_to_correct_probe_avg"]][["p"]]$omnibus_span_no_DFR_MRI < 0.05, "black", "grey")),
XDFR_MRI_ACC_L3 = cell_spec(XDFR_MRI_ACC_L3, "html",
color =ifelse(correlations[["encoding_to_correct_probe_avg"]][["p"]]$XDFR_MRI_ACC_L3 < 0.05, "black", "grey")),
BPRS_TOT.x = cell_spec(BPRS_TOT.x, "html",
color =ifelse(correlations[["encoding_to_correct_probe_avg"]][["p"]]$BPRS_TOT.x < 0.05, "black", "grey"))
) %>%
select(condition,omnibus_span_no_DFR_MRI,XDFR_MRI_ACC_L3,BPRS_TOT.x) %>%
kable(format = "html", escape = F) %>%
kable_styling("striped", full_width = F) %>%
add_header_above((c(" ", "Individual Encoding to Template probe" = 3)))
| condition | omnibus_span_no_DFR_MRI | XDFR_MRI_ACC_L3 | BPRS_TOT.x |
|---|---|---|---|
| low load incorrect | -0.0233014821883 | 0.180143228787264 | -0.0596330938748161 |
| high load incorrect | 0.0295132539321085 | 0.191840755817728 | -0.0751303711927123 |
| low load correct | 0.0675959888532484 | 0.211685012506652 | -0.0865980630474684 |
| high load correct | 0.0809525819237514 | 0.188723796046559 | -0.0721345840999969 |
This measure distinguishes both load and accuracy.
encoding_to_correct_probe_avg <- fisherz(similarity_temp[["encoding_to_correct_probe_avg"]])
t.test(encoding_to_correct_probe_avg[,4],encoding_to_correct_probe_avg[,2],paired=TRUE)
##
## Paired t-test
##
## data: encoding_to_correct_probe_avg[, 4] and encoding_to_correct_probe_avg[, 2]
## t = 2.5252, df = 169, p-value = 0.01248
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## 0.004154446 0.033919793
## sample estimates:
## mean of the differences
## 0.01903712
t.test(encoding_to_correct_probe_avg[,4],encoding_to_correct_probe_avg[,3],paired=TRUE)
##
## Paired t-test
##
## data: encoding_to_correct_probe_avg[, 4] and encoding_to_correct_probe_avg[, 3]
## t = 7.4381, df = 169, p-value = 4.91e-12
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## 0.03863874 0.06655843
## sample estimates:
## mean of the differences
## 0.05259859
(encoding_to_probe_plots[["encoding_to_correct_probe_avg"]][["omnibus"]][["high_load_correct"]] + encoding_to_probe_plots[["encoding_to_correct_probe_avg"]][["omnibus"]][["high_load_incorrect"]]) /
(encoding_to_probe_plots[["encoding_to_correct_probe_avg"]][["omnibus"]][["low_load_correct"]] +
encoding_to_probe_plots[["encoding_to_correct_probe_avg"]][["omnibus"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs omnibus")
## Warning: Removed 57 rows containing non-finite values (stat_smooth).
## Warning: Removed 57 rows containing missing values (geom_point).
(encoding_to_probe_plots[["encoding_to_correct_probe_avg"]][["L3_Acc"]][["high_load_correct"]] + encoding_to_probe_plots[["encoding_to_correct_probe_avg"]][["L3_Acc"]][["high_load_incorrect"]]) /
(encoding_to_probe_plots[["encoding_to_correct_probe_avg"]][["L3_Acc"]][["low_load_correct"]] +
encoding_to_probe_plots[["encoding_to_correct_probe_avg"]][["L3_Acc"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs High Load Accuracy")
## Warning: Removed 57 rows containing non-finite values (stat_smooth).
## Warning: Removed 57 rows containing missing values (geom_point).
(encoding_to_probe_plots[["encoding_to_correct_probe_avg"]][["BPRS"]][["high_load_correct"]] + encoding_to_probe_plots[["encoding_to_correct_probe_avg"]][["BPRS"]][["high_load_incorrect"]]) /
(encoding_to_probe_plots[["encoding_to_correct_probe_avg"]][["BPRS"]][["low_load_correct"]] +
encoding_to_probe_plots[["encoding_to_correct_probe_avg"]][["BPRS"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs BPRS Total")
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 58 rows containing non-finite values (stat_smooth).
## Warning: Removed 58 rows containing missing values (geom_point).
Here, we’re back to only seeing a relationship between accuracy and high load correct trials.
correlations[["correct_encoding_to_probe_avg"]][["r"]] %>%
mutate(
condition = row.names(.),
omnibus_span_no_DFR_MRI = cell_spec(omnibus_span_no_DFR_MRI, "html",
color =ifelse(correlations[["correct_encoding_to_probe_avg"]][["p"]]$omnibus_span_no_DFR_MRI < 0.05, "black", "grey")),
XDFR_MRI_ACC_L3 = cell_spec(XDFR_MRI_ACC_L3, "html",
color =ifelse(correlations[["correct_encoding_to_probe_avg"]][["p"]]$XDFR_MRI_ACC_L3 < 0.05, "black", "grey")),
BPRS_TOT.x = cell_spec(BPRS_TOT.x, "html",
color =ifelse(correlations[["correct_encoding_to_probe_avg"]][["p"]]$BPRS_TOT.x < 0.05, "black", "grey"))
) %>%
select(condition,omnibus_span_no_DFR_MRI,XDFR_MRI_ACC_L3,BPRS_TOT.x) %>%
kable(format = "html", escape = F) %>%
kable_styling("striped", full_width = F) %>%
add_header_above((c(" ", "Template Encoding to probe" = 3)))
| condition | omnibus_span_no_DFR_MRI | XDFR_MRI_ACC_L3 | BPRS_TOT.x |
|---|---|---|---|
| low load incorrect | -0.0762342413747194 | 0.0290332248955645 | 0.0646720541050286 |
| high load incorrect | -0.0175160242282491 | 0.0684533729427094 | -0.0938632520843353 |
| low load correct | 0.0252641685058543 | 0.114685238680304 | -0.0164848173216513 |
| high load correct | 0.0144723653941234 | 0.236736488996031 | -0.0475945780836705 |
This measure only distinguishes load.
correct_encoding_to_probe_avg <- fisherz(similarity_temp[["correct_encoding_to_probe_avg"]])
t.test(correct_encoding_to_probe_avg[,4],correct_encoding_to_probe_avg[,2],paired=TRUE)
##
## Paired t-test
##
## data: correct_encoding_to_probe_avg[, 4] and correct_encoding_to_probe_avg[, 2]
## t = 1.3892, df = 169, p-value = 0.1666
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.004223028 0.024281031
## sample estimates:
## mean of the differences
## 0.010029
t.test(correct_encoding_to_probe_avg[,4],correct_encoding_to_probe_avg[,3],paired=TRUE)
##
## Paired t-test
##
## data: correct_encoding_to_probe_avg[, 4] and correct_encoding_to_probe_avg[, 3]
## t = -2.4008, df = 169, p-value = 0.01744
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.025937268 -0.002529988
## sample estimates:
## mean of the differences
## -0.01423363
(encoding_to_probe_plots[["correct_encoding_to_probe_avg"]][["omnibus"]][["high_load_correct"]] + encoding_to_probe_plots[["correct_encoding_to_probe_avg"]][["omnibus"]][["high_load_incorrect"]]) /
(encoding_to_probe_plots[["correct_encoding_to_probe_avg"]][["omnibus"]][["low_load_correct"]] +
encoding_to_probe_plots[["correct_encoding_to_probe_avg"]][["omnibus"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs omnibus")
## Warning: Removed 57 rows containing non-finite values (stat_smooth).
## Warning: Removed 57 rows containing missing values (geom_point).
(encoding_to_probe_plots[["correct_encoding_to_probe_avg"]][["L3_Acc"]][["high_load_correct"]] + encoding_to_probe_plots[["correct_encoding_to_probe_avg"]][["L3_Acc"]][["high_load_incorrect"]]) /
(encoding_to_probe_plots[["correct_encoding_to_probe_avg"]][["L3_Acc"]][["low_load_correct"]] +
encoding_to_probe_plots[["correct_encoding_to_probe_avg"]][["L3_Acc"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs High Load Accuracy")
## Warning: Removed 57 rows containing non-finite values (stat_smooth).
## Warning: Removed 57 rows containing missing values (geom_point).
(encoding_to_probe_plots[["correct_encoding_to_probe_avg"]][["BPRS"]][["high_load_correct"]] + encoding_to_probe_plots[["correct_encoding_to_probe_avg"]][["BPRS"]][["high_load_incorrect"]]) /
(encoding_to_probe_plots[["correct_encoding_to_probe_avg"]][["BPRS"]][["low_load_correct"]] +
encoding_to_probe_plots[["correct_encoding_to_probe_avg"]][["BPRS"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs BPRS Total")
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 58 rows containing non-finite values (stat_smooth).
## Warning: Removed 58 rows containing missing values (geom_point).
Again, we’re only seeing a relationship between similarity and accuracy.
correlations[["correct_encoding_to_correct_probe"]][c(2,3,5),] %>%
mutate(
condition = row.names(.),
r = cell_spec(r, "html",
color =ifelse(p < 0.05, "black", "grey"))
) %>%
select(condition,r) %>%
kable(format = "html", escape = F) %>%
kable_styling("striped", full_width = F) %>%
add_header_above(c(" ", "Template Encoding to Template Probe" = 1))
| condition | r |
|---|---|
| omnibus_span_no_DFR_MRI | 0.0769216725339786 |
| XDFR_MRI_ACC_L3 | 0.144330609485881 |
| BPRS_TOT.x | -0.013932844644966 |
encoding_to_probe_plots[["correct_encoding_to_correct_probe"]][["omnibus"]]
encoding_to_probe_plots[["correct_encoding_to_correct_probe"]][["L3_Acc"]]
encoding_to_probe_plots[["correct_encoding_to_correct_probe"]][["BPRS"]]
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
Another theoretically interesting comparison is between delay and probe. Interestingly, similarity here seems to be diagnostic only of correct vs incorrect trials (not high vs low load), and is also related to overall accuracy.
delay_to_probe_plots <- list()
for (i in c(3,8,9)){
colnames(similarity_temp[[i]]) <- unlist(similarity_temp[[1]][[1]])
similarity_temp[[i]][similarity_temp[[i]]==0] <- NA
temp_plot_data <- cbind.data.frame(data_to_plot,similarity_temp[[i]])
delay_to_probe_plots[[names(similarity_temp)[i]]][["omnibus"]][["low_load_incorrect"]] <- ggplot(data = temp_plot_data,aes(x=omnibus_span_no_DFR_MRI,y=`low load incorrect`))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("Low Load - Incorrect trials")+
theme_classic()
delay_to_probe_plots[[names(similarity_temp)[i]]][["omnibus"]][["high_load_incorrect"]] <- ggplot(data = temp_plot_data,aes(x=omnibus_span_no_DFR_MRI,y=`high load incorrect`))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("High Load - Incorrect trials")+
theme_classic()
delay_to_probe_plots[[names(similarity_temp)[i]]][["omnibus"]][["low_load_correct"]] <- ggplot(data = temp_plot_data,aes(x=omnibus_span_no_DFR_MRI,y=`low load correct`))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("Low Load - Correct trials")+
theme_classic()
delay_to_probe_plots[[names(similarity_temp)[i]]][["omnibus"]][["high_load_correct"]] <- ggplot(data = temp_plot_data,aes(x=omnibus_span_no_DFR_MRI,y=`high load correct`))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("High Load - Correct trials")+
theme_classic()
delay_to_probe_plots[[names(similarity_temp)[i]]][["L3_Acc"]][["low_load_incorrect"]] <- ggplot(data = temp_plot_data,aes(x=XDFR_MRI_ACC_L3,y=`low load incorrect`))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("Low Load - Incorrect trials")+
theme_classic()
delay_to_probe_plots[[names(similarity_temp)[i]]][["L3_Acc"]][["high_load_incorrect"]] <- ggplot(data = temp_plot_data,aes(x=XDFR_MRI_ACC_L3,y=`high load incorrect`))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("High Load - Incorrect trials")+
theme_classic()
delay_to_probe_plots[[names(similarity_temp)[i]]][["L3_Acc"]][["low_load_correct"]] <- ggplot(data = temp_plot_data,aes(x=XDFR_MRI_ACC_L3,y=`low load correct`))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("Low Load - Correct trials")+
theme_classic()
delay_to_probe_plots[[names(similarity_temp)[i]]][["L3_Acc"]][["high_load_correct"]] <- ggplot(data = temp_plot_data,aes(x=XDFR_MRI_ACC_L3,y=`high load correct`))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("High Load - Correct trials")+
theme_classic()
delay_to_probe_plots[[names(similarity_temp)[i]]][["BPRS"]][["low_load_incorrect"]] <- ggplot(data = temp_plot_data,aes(x=BPRS_TOT.x,y=`low load incorrect`))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("Low Load - Incorrect trials")+
theme_classic()
delay_to_probe_plots[[names(similarity_temp)[i]]][["BPRS"]][["high_load_incorrect"]] <- ggplot(data = temp_plot_data,aes(x=BPRS_TOT.x,y=`high load incorrect`))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("High Load - Incorrect trials")+
theme_classic()
delay_to_probe_plots[[names(similarity_temp)[i]]][["BPRS"]][["low_load_correct"]] <- ggplot(data = temp_plot_data,aes(x=BPRS_TOT.x,y=`low load correct`))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("Low Load - Correct trials")+
theme_classic()
delay_to_probe_plots[[names(similarity_temp)[i]]][["BPRS"]][["high_load_correct"]] <- ggplot(data = temp_plot_data,aes(x=BPRS_TOT.x,y=`high load correct`))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("High Load - Correct trials")+
theme_classic()
}
temp_plot_data <- cbind.data.frame(data_to_plot,similarity_temp[["correct_delay_to_correct_probe"]])
colnames(temp_plot_data)[9] <- "correct_delay_probe"
delay_to_probe_plots[["correct_delay_to_correct_probe"]][["omnibus"]] <- ggplot(data =
temp_plot_data,aes(x=omnibus_span_no_DFR_MRI,y=correct_delay_probe))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("Template delay/probe vs Omnibus Span")+
theme_classic()
delay_to_probe_plots[["correct_delay_to_correct_probe"]][["BPRS"]] <- ggplot(data =
temp_plot_data,aes(x=BPRS_TOT.x,y=correct_delay_probe))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("Template delay/probe vs BPRS")+
theme_classic()
delay_to_probe_plots[["correct_delay_to_correct_probe"]][["L3_Acc"]] <- ggplot(data =
temp_plot_data,aes(x=XDFR_MRI_ACC_L3,y=correct_delay_probe))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("Template delay/probe vs L3 accuracy")+
theme_classic()
In the tables below, a correlation printed in black is significant at p < 0.05; otherwise it is printed in grey.
correlations = list()
for (i in c(3,8,9)){
colnames(similarity_temp[[i]]) <- unlist(similarity_temp[[1]][[1]])
temp_list <- list(r = data.frame(matrix(nrow=4,ncol=6)), p = data.frame(matrix(nrow=4,ncol=6)))
for (behav in seq.int(2,7)){
for (sim in seq.int(1,4)){
temp_corr <- cor.test(similarity_temp[[i]][,sim],data_to_plot[,behav])
temp_list[["r"]][sim,behav-1] <- temp_corr$estimate
temp_list[["p"]][sim,behav-1] <- temp_corr$p.value
}
}
colnames(temp_list[["r"]]) <- colnames(data_to_plot)[2:7]
rownames(temp_list[["r"]]) <- colnames(similarity_temp[[i]])
colnames(temp_list[["p"]]) <- colnames(data_to_plot)[2:7]
rownames(temp_list[["p"]]) <- colnames(similarity_temp[[i]])
correlations[[names(similarity_temp)[i]]] <- temp_list
}
temp <- data.frame(r=matrix(nrow=6,ncol=1),p=matrix(nrow=6,ncol=1))
rownames(temp) <- colnames(data_to_plot)[2:7]
for (behav in seq.int(2,7)){
temp_corr <- cor.test(similarity_temp[["correct_delay_to_correct_probe"]],data_to_plot[,behav])
temp$r[behav-1] <- temp_corr$estimate
temp$p[behav-1] <- temp_corr$p.value
}
correlations[["correct_delay_to_correct_probe"]] <- temp
Here, we take the correlation between the multivariate representation at delay (TR 5) and probe (TR 11) on each trial.
We’re seeing a correlation with accuracy for correct trials, regardless of load.
correlations[["delay_to_probe_avg"]][["r"]] %>%
mutate(
condition = row.names(.),
omnibus_span_no_DFR_MRI = cell_spec(omnibus_span_no_DFR_MRI, "html",
color =ifelse(correlations[["delay_to_probe_avg"]][["p"]]$omnibus_span_no_DFR_MRI < 0.05, "black", "grey")),
XDFR_MRI_ACC_L3 = cell_spec(XDFR_MRI_ACC_L3, "html",
color =ifelse(correlations[["delay_to_probe_avg"]][["p"]]$XDFR_MRI_ACC_L3 < 0.05, "black", "grey")),
BPRS_TOT.x = cell_spec(BPRS_TOT.x, "html",
color =ifelse(correlations[["delay_to_probe_avg"]][["p"]]$BPRS_TOT.x < 0.05, "black", "grey"))
) %>%
select(condition,omnibus_span_no_DFR_MRI,XDFR_MRI_ACC_L3,BPRS_TOT.x) %>%
kable(format = "html", escape = F) %>%
kable_styling("striped", full_width = F) %>%
add_header_above((c(" ", "Individual delay to Individual Probe" = 3)))
| condition | omnibus_span_no_DFR_MRI | XDFR_MRI_ACC_L3 | BPRS_TOT.x |
|---|---|---|---|
| low load incorrect | 0.0999474832000594 | 0.101840689845703 | -0.0214818240399329 |
| high load incorrect | 0.0285989286650679 | -0.0437534440804446 | -0.0699533108969534 |
| low load correct | -0.0617493074180673 | -0.171188164552925 | -0.0661245734978317 |
| high load correct | 0.0243411825232576 | -0.15875421838959 | -0.0783033920592679 |
This measure can’t distinguish either load or accuracy.
delay_to_probe_avg <- fisherz(similarity_temp[["delay_to_probe_avg"]])
t.test(delay_to_probe_avg[,4],delay_to_probe_avg[,2],paired=TRUE)
##
## Paired t-test
##
## data: delay_to_probe_avg[, 4] and delay_to_probe_avg[, 2]
## t = -0.65136, df = 169, p-value = 0.5157
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.02482484 0.01250702
## sample estimates:
## mean of the differences
## -0.006158913
t.test(delay_to_probe_avg[,4],delay_to_probe_avg[,3],paired=TRUE)
##
## Paired t-test
##
## data: delay_to_probe_avg[, 4] and delay_to_probe_avg[, 3]
## t = 0.70064, df = 169, p-value = 0.4845
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.008108084 0.017029941
## sample estimates:
## mean of the differences
## 0.004460928
(delay_to_probe_plots[["delay_to_probe_avg"]][["omnibus"]][["high_load_correct"]] + delay_to_probe_plots[["delay_to_probe_avg"]][["omnibus"]][["high_load_incorrect"]]) /
(delay_to_probe_plots[["delay_to_probe_avg"]][["omnibus"]][["low_load_correct"]] +
delay_to_probe_plots[["delay_to_probe_avg"]][["omnibus"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs omnibus")
## Warning: Removed 57 rows containing non-finite values (stat_smooth).
## Warning: Removed 57 rows containing missing values (geom_point).
(delay_to_probe_plots[["delay_to_probe_avg"]][["L3_Acc"]][["high_load_correct"]] + delay_to_probe_plots[["delay_to_probe_avg"]][["L3_Acc"]][["high_load_incorrect"]]) /
(delay_to_probe_plots[["delay_to_probe_avg"]][["L3_Acc"]][["low_load_correct"]] +
delay_to_probe_plots[["delay_to_probe_avg"]][["L3_Acc"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs High Load Accuracy")
## Warning: Removed 57 rows containing non-finite values (stat_smooth).
## Warning: Removed 57 rows containing missing values (geom_point).
(delay_to_probe_plots[["delay_to_probe_avg"]][["BPRS"]][["high_load_correct"]] + delay_to_probe_plots[["delay_to_probe_avg"]][["BPRS"]][["high_load_incorrect"]]) /
(delay_to_probe_plots[["delay_to_probe_avg"]][["BPRS"]][["low_load_correct"]] +
delay_to_probe_plots[["delay_to_probe_avg"]][["BPRS"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs BPRS Total")
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 58 rows containing non-finite values (stat_smooth).
## Warning: Removed 58 rows containing missing values (geom_point).
Same as before - we only see a relationship between similarity and accuracy for correct trials, regardless of load.
correlations[["delay_to_correct_probe_avg"]][["r"]] %>%
mutate(
condition = row.names(.),
omnibus_span_no_DFR_MRI = cell_spec(omnibus_span_no_DFR_MRI, "html",
color =ifelse(correlations[["delay_to_correct_probe_avg"]][["p"]]$omnibus_span_no_DFR_MRI < 0.05, "black", "grey")),
XDFR_MRI_ACC_L3 = cell_spec(XDFR_MRI_ACC_L3, "html",
color =ifelse(correlations[["delay_to_correct_probe_avg"]][["p"]]$XDFR_MRI_ACC_L3 < 0.05, "black", "grey")),
BPRS_TOT.x = cell_spec(BPRS_TOT.x, "html",
color =ifelse(correlations[["delay_to_correct_probe_avg"]][["p"]]$BPRS_TOT.x < 0.05, "black", "grey"))
) %>%
select(condition,omnibus_span_no_DFR_MRI,XDFR_MRI_ACC_L3,BPRS_TOT.x) %>%
kable(format = "html", escape = F) %>%
kable_styling("striped", full_width = F) %>%
add_header_above((c(" ", "Individual delay to Template probe" = 3)))
| condition | omnibus_span_no_DFR_MRI | XDFR_MRI_ACC_L3 | BPRS_TOT.x |
|---|---|---|---|
| low load incorrect | 0.0177696165294611 | -0.0764157965740026 | -0.0647113955562519 |
| high load incorrect | 0.0239315782060482 | -0.108487267294542 | 0.0626072686324235 |
| low load correct | 0.0576380942776104 | -0.160013167275548 | -0.0673209132769745 |
| high load correct | 0.0699602946380556 | -0.231199129139215 | 0.0707735244701477 |
This measure distinguishes accuracy but not load.
delay_to_correct_probe_avg <- fisherz(similarity_temp[["delay_to_correct_probe_avg"]])
t.test(delay_to_correct_probe_avg[,4],delay_to_correct_probe_avg[,2],paired=TRUE)
##
## Paired t-test
##
## data: delay_to_correct_probe_avg[, 4] and delay_to_correct_probe_avg[, 2]
## t = -2.7392, df = 169, p-value = 0.006819
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.03716834 -0.00603378
## sample estimates:
## mean of the differences
## -0.02160106
t.test(delay_to_correct_probe_avg[,4],delay_to_correct_probe_avg[,3],paired=TRUE)
##
## Paired t-test
##
## data: delay_to_correct_probe_avg[, 4] and delay_to_correct_probe_avg[, 3]
## t = 0.99115, df = 169, p-value = 0.323
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.005798725 0.017492879
## sample estimates:
## mean of the differences
## 0.005847077
(delay_to_probe_plots[["delay_to_correct_probe_avg"]][["omnibus"]][["high_load_correct"]] + delay_to_probe_plots[["delay_to_correct_probe_avg"]][["omnibus"]][["high_load_incorrect"]]) /
(delay_to_probe_plots[["delay_to_correct_probe_avg"]][["omnibus"]][["low_load_correct"]] +
delay_to_probe_plots[["delay_to_correct_probe_avg"]][["omnibus"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs omnibus")
## Warning: Removed 57 rows containing non-finite values (stat_smooth).
## Warning: Removed 57 rows containing missing values (geom_point).
(delay_to_probe_plots[["delay_to_correct_probe_avg"]][["L3_Acc"]][["high_load_correct"]] + delay_to_probe_plots[["delay_to_correct_probe_avg"]][["L3_Acc"]][["high_load_incorrect"]]) /
(delay_to_probe_plots[["delay_to_correct_probe_avg"]][["L3_Acc"]][["low_load_correct"]] +
delay_to_probe_plots[["delay_to_correct_probe_avg"]][["L3_Acc"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs High Load Accuracy")
## Warning: Removed 57 rows containing non-finite values (stat_smooth).
## Warning: Removed 57 rows containing missing values (geom_point).
(delay_to_probe_plots[["delay_to_correct_probe_avg"]][["BPRS"]][["high_load_correct"]] + delay_to_probe_plots[["delay_to_correct_probe_avg"]][["BPRS"]][["high_load_incorrect"]]) /
(delay_to_probe_plots[["delay_to_correct_probe_avg"]][["BPRS"]][["low_load_correct"]] +
delay_to_probe_plots[["delay_to_correct_probe_avg"]][["BPRS"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs BPRS Total")
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 58 rows containing non-finite values (stat_smooth).
## Warning: Removed 58 rows containing missing values (geom_point).
Same as before - only a relationship with accuracy for correct trials.
correlations[["correct_delay_to_probe_avg"]][["r"]] %>%
mutate(
condition = row.names(.),
omnibus_span_no_DFR_MRI = cell_spec(omnibus_span_no_DFR_MRI, "html",
color =ifelse(correlations[["correct_delay_to_probe_avg"]][["p"]]$omnibus_span_no_DFR_MRI < 0.05, "black", "grey")),
XDFR_MRI_ACC_L3 = cell_spec(XDFR_MRI_ACC_L3, "html",
color =ifelse(correlations[["correct_delay_to_probe_avg"]][["p"]]$XDFR_MRI_ACC_L3 < 0.05, "black", "grey")),
BPRS_TOT.x = cell_spec(BPRS_TOT.x, "html",
color =ifelse(correlations[["correct_delay_to_probe_avg"]][["p"]]$BPRS_TOT.x < 0.05, "black", "grey"))
) %>%
select(condition,omnibus_span_no_DFR_MRI,XDFR_MRI_ACC_L3,BPRS_TOT.x) %>%
kable(format = "html", escape = F) %>%
kable_styling("striped", full_width = F) %>%
add_header_above((c(" ", "Template delay to probe" = 3)))
| condition | omnibus_span_no_DFR_MRI | XDFR_MRI_ACC_L3 | BPRS_TOT.x |
|---|---|---|---|
| low load incorrect | 0.0372099118469689 | 0.00968201164217853 | 0.111065967359362 |
| high load incorrect | 0.0267511870180811 | -0.0382426602626256 | 0.0464791406312869 |
| low load correct | -0.0097945052491883 | -0.165867737691321 | 0.0533419833418061 |
| high load correct | 0.00768243154725069 | -0.15887892851924 | 0.0455135323513504 |
This measure only distinguishes accuracy.
correct_delay_to_probe_avg <- fisherz(similarity_temp[["correct_delay_to_probe_avg"]])
t.test(correct_delay_to_probe_avg[,4],correct_delay_to_probe_avg[,2],paired=TRUE)
##
## Paired t-test
##
## data: correct_delay_to_probe_avg[, 4] and correct_delay_to_probe_avg[, 2]
## t = -2.1722, df = 169, p-value = 0.03123
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.033911450 -0.001620536
## sample estimates:
## mean of the differences
## -0.01776599
t.test(correct_delay_to_probe_avg[,4],correct_delay_to_probe_avg[,3],paired=TRUE)
##
## Paired t-test
##
## data: correct_delay_to_probe_avg[, 4] and correct_delay_to_probe_avg[, 3]
## t = -1.4859, df = 169, p-value = 0.1392
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.020746555 0.002927505
## sample estimates:
## mean of the differences
## -0.008909525
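A note on the transform: `fisherz` from psych is the arctanh transform, which makes correlation values approximately normally distributed so that the paired t-tests above are appropriate. A minimal base-R illustration (not the document's data):

```r
# Fisher z-transform: z = atanh(r) = 0.5 * log((1 + r) / (1 - r)).
# It stretches correlations near +/-1 so their sampling distribution
# is approximately normal before running t-tests.
r <- c(-0.8, 0, 0.5, 0.9)
z <- atanh(r)

# The inverse (tanh) recovers the original correlations.
stopifnot(isTRUE(all.equal(tanh(z), r)))
stopifnot(isTRUE(all.equal(atanh(0.5), 0.5 * log(3))))
```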
(delay_to_probe_plots[["correct_delay_to_probe_avg"]][["omnibus"]][["high_load_correct"]] + delay_to_probe_plots[["correct_delay_to_probe_avg"]][["omnibus"]][["high_load_incorrect"]]) /
(delay_to_probe_plots[["correct_delay_to_probe_avg"]][["omnibus"]][["low_load_correct"]] +
delay_to_probe_plots[["correct_delay_to_probe_avg"]][["omnibus"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs omnibus")
## Warning: Removed 57 rows containing non-finite values (stat_smooth).
## Warning: Removed 57 rows containing missing values (geom_point).
(delay_to_probe_plots[["correct_delay_to_probe_avg"]][["L3_Acc"]][["high_load_correct"]] + delay_to_probe_plots[["correct_delay_to_probe_avg"]][["L3_Acc"]][["high_load_incorrect"]]) /
(delay_to_probe_plots[["correct_delay_to_probe_avg"]][["L3_Acc"]][["low_load_correct"]] +
delay_to_probe_plots[["correct_delay_to_probe_avg"]][["L3_Acc"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs High Load Accuracy")
## Warning: Removed 57 rows containing non-finite values (stat_smooth).
## Warning: Removed 57 rows containing missing values (geom_point).
(delay_to_probe_plots[["correct_delay_to_probe_avg"]][["BPRS"]][["high_load_correct"]] + delay_to_probe_plots[["correct_delay_to_probe_avg"]][["BPRS"]][["high_load_incorrect"]]) /
(delay_to_probe_plots[["correct_delay_to_probe_avg"]][["BPRS"]][["low_load_correct"]] +
delay_to_probe_plots[["correct_delay_to_probe_avg"]][["BPRS"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs BPRS Total")
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 58 rows containing non-finite values (stat_smooth).
## Warning: Removed 58 rows containing missing values (geom_point).
Again, we only see a relationship between the correlation and accuracy.
correlations[["correct_delay_to_correct_probe"]][c(2, 3, 5), ] %>%
  mutate(
    condition = row.names(.),
    r = cell_spec(r, "html", color = ifelse(p < 0.05, "black", "grey"))
  ) %>%
  select(condition, r) %>%
  kable(format = "html", escape = F) %>%
  kable_styling("striped", full_width = F) %>%
  add_header_above(c(" ", "Template delay to Template Probe" = 1))
| condition | r |
|---|---|
| omnibus_span_no_DFR_MRI | 0.0573977265501652 |
| XDFR_MRI_ACC_L3 | -0.12504600857427 |
| BPRS_TOT.x | -0.0544283394074548 |
delay_to_probe_plots[["correct_delay_to_correct_probe"]][["omnibus"]]
delay_to_probe_plots[["correct_delay_to_correct_probe"]][["L3_Acc"]]
delay_to_probe_plots[["correct_delay_to_correct_probe"]][["BPRS"]]
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
Now, we move on to regions associated with load effects.
similarity_temp <- read.mat('data/intertrial_similarity_DFR.mat')
for (i in seq.int(14, 17)) {
  similarity_temp[[i]] <- data.frame(similarity_temp[[i]])
  similarity_temp[[i]][similarity_temp[[i]] == 0] <- NA
  similarity_temp[[i]]$PTID <- constructs_fMRI$PTID
}
cond_avgs <- data.frame(matrix(nrow=4,ncol=14))
cond_avgs[1,] <- colMeans(similarity_temp[[14]][,1:14],na.rm=TRUE)
cond_avgs[2,] <- colMeans(similarity_temp[[15]][,1:14],na.rm=TRUE)
cond_avgs[3,] <- colMeans(similarity_temp[[16]][,1:14],na.rm=TRUE)
cond_avgs[4,] <- colMeans(similarity_temp[[17]][,1:14],na.rm=TRUE)
cond_avgs$group <- factor(names(similarity_temp)[14:17])
colnames(cond_avgs)[1:14] <- c(1:14)
se_avgs <- data.frame(matrix(nrow=4,ncol=14))
se_avgs[1,] <- sapply(similarity_temp[[14]][,1:14],se)
se_avgs[2,] <- sapply(similarity_temp[[15]][,1:14],se)
se_avgs[3,] <- sapply(similarity_temp[[16]][,1:14],se)
se_avgs[4,] <- sapply(similarity_temp[[17]][,1:14],se)
se_avgs$group <- factor(names(similarity_temp)[14:17])
colnames(se_avgs)[1:14] <- c(1:14)
cond_melt <- melt(cond_avgs, id.vars = "group")
colnames(cond_melt) <- c("group", "TR", "similarity")
cond_melt$TR <- as.numeric(as.character(cond_melt$TR))
se_melt <- melt(se_avgs,id.vars="group")
colnames(se_melt) <- c("group", "TR", "se")
se_melt$TR <- as.numeric(as.character(se_melt$TR))
melt_avg_data <- merge(cond_melt,se_melt,by=c("group","TR"))
melt_avg_data$se_min <- melt_avg_data$similarity-melt_avg_data$se
melt_avg_data$se_max <- melt_avg_data$similarity+melt_avg_data$se
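The `se` function used for the ribbons above is defined earlier in this document; it is presumably the NA-aware standard error of the mean. A minimal version under that assumption:

```r
# Hypothetical stand-in for the `se` helper defined earlier in the
# document: standard error of the mean, ignoring NAs.
se <- function(x) {
  x <- x[!is.na(x)]
  sd(x) / sqrt(length(x))
}

se(c(1, 2, 3, NA))  # sd(1:3) / sqrt(3) = 1 / sqrt(3)
```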
Although the errors are slightly larger and the correlations slightly lower, we see the same patterns as in the fusiform.
For this mask, we see differences in accuracy at high load at TRs 5, 6, 8, 11, 13, and 14, and at TRs 1, 2, 4-11, 13, and 14 for low load. We can distinguish between loads for correct trials at TRs 2-7, 11, and 12, and for incorrect trials at TRs 1, 2, 4-7, and 10-14.
ggplot(data=melt_avg_data,aes(x=TR,y=similarity))+
geom_line(aes(color=group)) +
geom_ribbon(aes(ymin=se_min,ymax=se_max,fill=group),alpha=0.2)+
scale_x_continuous(breaks = c(1:14),labels=c(1:14))+
ggtitle("Intertrial similarity averaged over all subjects")+
theme_classic()
corrected_p_val <- 0.05/14
low_load_acc_test <- data.frame(matrix(nrow=3,ncol=14))
colnames(low_load_acc_test) <- paste("TR_",c(1:14))
rownames(low_load_acc_test) <- c("t","p","corrected_p")
high_load_acc_test <- data.frame(matrix(nrow=3,ncol=14))
colnames(high_load_acc_test) <- paste("TR_",c(1:14))
rownames(high_load_acc_test) <- c("t","p","corrected_sig")
for (time in seq.int(1, 14)) {
  low_test <- t.test(similarity_temp[[16]][, time], similarity_temp[[17]][, time], paired = TRUE)
  high_test <- t.test(similarity_temp[[14]][, time], similarity_temp[[15]][, time], paired = TRUE)
  low_load_acc_test[1, time] <- low_test$statistic
  low_load_acc_test[2, time] <- low_test$p.value
  low_load_acc_test[3, time] <- as.numeric(low_test$p.value < corrected_p_val)
  high_load_acc_test[1, time] <- high_test$statistic
  high_load_acc_test[2, time] <- high_test$p.value
  high_load_acc_test[3, time] <- as.numeric(high_test$p.value < corrected_p_val)
}
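The hand-rolled 0.05/14 threshold is a Bonferroni correction; the same decision can be made with `p.adjust`, which scales the p-values instead of the threshold. A sketch on illustrative p-values (not the document's data):

```r
# Bonferroni two ways: compare p against 0.05/n, or compare
# p.adjust(p, "bonferroni") = pmin(p * n, 1) against 0.05.
p_vals <- c(0.0043, 0.0419, 0.0000122, 0.0151)  # illustrative values
sig_threshold <- p_vals < 0.05 / length(p_vals)
sig_adjusted  <- p.adjust(p_vals, method = "bonferroni") < 0.05
stopifnot(identical(sig_threshold, sig_adjusted))
```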
high_load_acc_test %>%
  kable(format = "html", escape = F) %>%
  kable_styling("striped", full_width = F) %>%
  add_header_above(c(" ", "t-test between correct and incorrect values at each time point, high load trials" = 14))
| | TR_ 1 | TR_ 2 | TR_ 3 | TR_ 4 | TR_ 5 | TR_ 6 | TR_ 7 | TR_ 8 | TR_ 9 | TR_ 10 | TR_ 11 | TR_ 12 | TR_ 13 | TR_ 14 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| t | 2.8892262 | 2.7912080 | 2.0493672 | 2.6898779 | 4.5071758 | 3.8798962 | 2.8603023 | 3.0914476 | 2.4548455 | 2.5887679 | 3.3097106 | 2.7078641 | 3.0584788 | 3.9483085 |
| p | 0.0043682 | 0.0058557 | 0.0419719 | 0.0078649 | 0.0000122 | 0.0001495 | 0.0047665 | 0.0023303 | 0.0151077 | 0.0104713 | 0.0011416 | 0.0074681 | 0.0025873 | 0.0001153 |
| corrected_sig | 0.0000000 | 0.0000000 | 0.0000000 | 0.0000000 | 1.0000000 | 1.0000000 | 0.0000000 | 1.0000000 | 0.0000000 | 0.0000000 | 1.0000000 | 0.0000000 | 1.0000000 | 1.0000000 |
low_load_acc_test %>%
  kable(format = "html", escape = F) %>%
  kable_styling("striped", full_width = F) %>%
  add_header_above(c(" ", "t-test between correct and incorrect values at each time point, low load trials" = 14))
| | TR_ 1 | TR_ 2 | TR_ 3 | TR_ 4 | TR_ 5 | TR_ 6 | TR_ 7 | TR_ 8 | TR_ 9 | TR_ 10 | TR_ 11 | TR_ 12 | TR_ 13 | TR_ 14 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| t | 4.5677474 | 3.9554849 | 2.4179737 | 3.5077300 | 4.2436080 | 3.9153612 | 3.6523970 | 4.351031 | 5.0480337 | 5.7048552 | 4.3024208 | 2.2214536 | 4.5065711 | 6.026869 |
| p | 0.0000127 | 0.0001343 | 0.0172198 | 0.0006516 | 0.0000455 | 0.0001555 | 0.0003967 | 0.000030 | 0.0000017 | 0.0000001 | 0.0000363 | 0.0283313 | 0.0000163 | 0.000000 |
| corrected_p | 1.0000000 | 1.0000000 | 0.0000000 | 1.0000000 | 1.0000000 | 1.0000000 | 1.0000000 | 1.000000 | 1.0000000 | 1.0000000 | 1.0000000 | 0.0000000 | 1.0000000 | 1.000000 |
correct_load_test <- data.frame(matrix(nrow=3,ncol=14))
colnames(correct_load_test) <- paste("TR_",c(1:14))
rownames(correct_load_test) <- c("t","p","corrected_p")
incorrect_load_test <- data.frame(matrix(nrow=3,ncol=14))
colnames(incorrect_load_test) <- paste("TR_",c(1:14))
rownames(incorrect_load_test) <- c("t","p","corrected_p")
for (time in seq.int(1, 14)) {
  correct_test <- t.test(similarity_temp[[14]][, time], similarity_temp[[16]][, time], paired = TRUE)
  incorrect_test <- t.test(similarity_temp[[15]][, time], similarity_temp[[17]][, time], paired = TRUE)
  correct_load_test[1, time] <- correct_test$statistic
  correct_load_test[2, time] <- correct_test$p.value
  correct_load_test[3, time] <- as.numeric(correct_test$p.value < corrected_p_val)
  incorrect_load_test[1, time] <- incorrect_test$statistic
  incorrect_load_test[2, time] <- incorrect_test$p.value
  incorrect_load_test[3, time] <- as.numeric(incorrect_test$p.value < corrected_p_val)
}
correct_load_test %>%
  kable(format = "html", escape = F) %>%
  kable_styling("striped", full_width = F) %>%
  add_header_above(c(" ", "t-test between high and low loads at each time point, correct trials" = 14))
| | TR_ 1 | TR_ 2 | TR_ 3 | TR_ 4 | TR_ 5 | TR_ 6 | TR_ 7 | TR_ 8 | TR_ 9 | TR_ 10 | TR_ 11 | TR_ 12 | TR_ 13 | TR_ 14 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| t | 0.6180727 | 3.0064805 | 4.0083435 | 5.3460520 | 10.45517 | 10.62552 | 5.2862178 | 0.2916840 | -1.7133688 | 2.2720350 | 9.108687 | 8.469332 | 2.5713662 | 2.103212 |
| p | 0.5373595 | 0.0030462 | 0.0000916 | 0.0000003 | 0.00000 | 0.00000 | 0.0000004 | 0.7708859 | 0.0884784 | 0.0243449 | 0.000000 | 0.000000 | 0.0109911 | 0.036928 |
| corrected_p | 0.0000000 | 1.0000000 | 1.0000000 | 1.0000000 | 1.00000 | 1.00000 | 1.0000000 | 0.0000000 | 0.0000000 | 0.0000000 | 1.000000 | 1.000000 | 0.0000000 | 0.000000 |
incorrect_load_test %>%
  kable(format = "html", escape = F) %>%
  kable_styling("striped", full_width = F) %>%
  add_header_above(c(" ", "t-test between high and low loads at each time point, incorrect trials" = 14))
| | TR_ 1 | TR_ 2 | TR_ 3 | TR_ 4 | TR_ 5 | TR_ 6 | TR_ 7 | TR_ 8 | TR_ 9 | TR_ 10 | TR_ 11 | TR_ 12 | TR_ 13 | TR_ 14 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| t | 2.987347 | 3.1658370 | 2.7808537 | 5.0031235 | 7.259286 | 8.209648 | 5.0380589 | 2.284257 | 2.5202473 | 5.5333814 | 6.030818 | 4.1353410 | 3.6298704 | 4.1161827 |
| p | 0.003458 | 0.0019922 | 0.0063622 | 0.0000021 | 0.000000 | 0.000000 | 0.0000018 | 0.024241 | 0.0131361 | 0.0000002 | 0.000000 | 0.0000688 | 0.0004289 | 0.0000739 |
| corrected_p | 1.000000 | 1.0000000 | 0.0000000 | 1.0000000 | 1.000000 | 1.000000 | 1.0000000 | 0.000000 | 0.0000000 | 1.0000000 | 1.000000 | 1.0000000 | 1.0000000 | 1.0000000 |
split_similarity <- list()
split_sim_avgs <- list()
for (i in seq.int(14, 17)) {
  split_similarity[[names(similarity_temp)[i]]] <- split_into_groups(similarity_temp[[i]], WM_groups)
  colnames(split_similarity[[i - 13]][["all"]])[1:14] <- c(1:14)
  for (level in seq.int(1, 3)) {
    temp_data <- data.frame(
      mean = colMeans(split_similarity[[i - 13]][[level]][1:14], na.rm = TRUE),
      se = sapply(split_similarity[[i - 13]][[level]][1:14], se),
      se_min = colMeans(split_similarity[[i - 13]][[level]][1:14], na.rm = TRUE) - sapply(split_similarity[[i - 13]][[level]][1:14], se),
      se_max = colMeans(split_similarity[[i - 13]][[level]][1:14], na.rm = TRUE) + sapply(split_similarity[[i - 13]][[level]][1:14], se)
    )
    split_sim_avgs[[names(split_similarity)[i - 13]]][[names(split_similarity[[i - 13]])[level]]] <- temp_data
    split_sim_avgs[[i - 13]][[level]]$group <- rep(names(split_similarity[[i - 13]])[level], 14)
    split_sim_avgs[[i - 13]][[level]]$TR <- seq.int(1, 14)
  }
  split_sim_avgs[[i - 13]][["all"]] <- rbind(split_sim_avgs[[i - 13]][["high"]], split_sim_avgs[[i - 13]][["med"]], split_sim_avgs[[i - 13]][["low"]])
  split_sim_avgs[[i - 13]][["all"]]$group <- factor(split_sim_avgs[[i - 13]][["all"]]$group, levels = c("high", "med", "low"))
}
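`split_into_groups` is a helper defined earlier in the document; under the assumption that it partitions subjects by a vector of working-memory capacity labels and keeps an "all" copy, a minimal compatible version might look like:

```r
# Hypothetical minimal version of split_into_groups: partition the rows
# of a data frame by a vector of group labels, keeping the full data
# under "all". The real helper presumably also controls the level order
# (high, med, low), so that is mimicked here.
split_into_groups <- function(df, groups) {
  out <- split(df, factor(groups, levels = c("high", "med", "low")))
  out[["all"]] <- df
  out
}

d <- data.frame(x = 1:4)
parts <- split_into_groups(d, c("high", "low", "high", "med"))
names(parts)  # "high" "med" "low" "all"
```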
Same story as in the fusiform, except we also see a little more group distinction in the encoding period for the high load correct trials, where high and medium capacity subjects show increased similarity relative to low capacity subjects.
sim_plots <- list()
for (i in seq.int(1, 4)) {
  sim_plots[[i]] <- ggplot(data = split_sim_avgs[[i]][["all"]]) +
    geom_line(aes(x = TR, y = mean, color = group)) +
    geom_ribbon(aes(x = TR, ymin = se_min, ymax = se_max, fill = group), alpha = 0.2) +
    scale_x_continuous(breaks = c(1:14), labels = c(1:14)) +
    ggtitle(names(split_sim_avgs)[i]) +
    ylab("Mean Similarity") +
    theme_classic()
}
(sim_plots[[1]] + sim_plots[[2]]) / (sim_plots[[3]] + sim_plots[[4]])+
plot_layout(guides = "collect")+
plot_annotation(title="Inter-trial Similarity")
data_to_plot <- merge(constructs_fMRI,p200_data,by="PTID")
data_to_plot <- merge(data_to_plot,p200_clinical_zscores,by="PTID")
data_to_plot <- data_to_plot[,c(1,6,7,13,14,40,41)]
data_to_plot$ACC_LE <- data_to_plot$XDFR_MRI_ACC_L3 - data_to_plot$XDFR_MRI_ACC_L1
corr_to_behav_plots <- list()
for (i in seq.int(14, 17)) {
  measure_by_time <- data.frame(matrix(nrow = 4, ncol = 14))
  for (measure in seq.int(3, 6)) {
    for (TR in seq.int(1, 14)) {
      measure_by_time[measure - 2, TR] <- cor(data_to_plot[, measure], similarity_temp[[i]][, TR], use = "pairwise.complete.obs")
    }
  }
  measure_by_time <- data.frame(t(measure_by_time))
  measure_by_time$TR <- seq.int(1, 14)
  colnames(measure_by_time)[1:4] <- colnames(data_to_plot)[3:6]
  melted_measure_by_time <- melt(measure_by_time, id.vars = "TR")
  corr_to_behav_plots[[names(similarity_temp)[i]]] <- ggplot(data = melted_measure_by_time, aes(x = TR, y = value)) +
    geom_line(aes(color = variable)) +
    scale_x_continuous(breaks = c(1:14), labels = c(1:14)) +
    ggtitle(names(similarity_temp)[i]) +
    theme_classic()
}
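As an aside, the nested measure-by-TR loop can be collapsed into a single `cor()` call, since `cor()` accepts two matrices and returns the full cross-correlation matrix. A sketch on synthetic data (variable names are illustrative):

```r
set.seed(1)
measures <- matrix(rnorm(40), nrow = 10)   # 10 subjects x 4 behavioral measures
tr_vals  <- matrix(rnorm(140), nrow = 10)  # 10 subjects x 14 TRs

# One call replaces the double loop; the result is 4 x 14.
r_mat <- cor(measures, tr_vals, use = "pairwise.complete.obs")
stopifnot(identical(dim(r_mat), c(4L, 14L)))
stopifnot(isTRUE(all.equal(r_mat[2, 3], cor(measures[, 2], tr_vals[, 3]))))
```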
Here, we're actually seeing something a little different and interesting: correlations are stronger overall, and omnibus span is most related during the encoding and probe periods, but not the delay period. There also isn't any correlation with the clinical measures.
(corr_to_behav_plots[[1]] + corr_to_behav_plots[[2]]) / (corr_to_behav_plots[[3]] + corr_to_behav_plots[[4]])+
plot_layout(guides="collect")+
plot_annotation(title = "Correlation between inter-trial similarity and behavioral measure")
scatter_plots_delay <- list()
scatter_plots_cue <- list()
scatter_plots_probe <- list()
for (i in seq.int(14,17)){
temp_plot_data <- merge(data_to_plot,similarity_temp[[i]],by="PTID")
scatter_plots_delay[[names(similarity_temp)[i]]][["omnibus"]] <- ggplot(data=temp_plot_data)+
geom_point(aes(x=omnibus_span_no_DFR_MRI,y=X8))+
stat_smooth(aes(x=omnibus_span_no_DFR_MRI,y=X8),method="lm")+
scale_x_continuous(breaks = c(1:14),labels=c(1:14))+
ggtitle(names(similarity_temp)[i])+
ylab("Inter-trial similarity")+
theme_classic()
scatter_plots_delay[[names(similarity_temp)[i]]][["BPRS"]] <- ggplot(data=temp_plot_data)+
geom_point(aes(x=BPRS_TOT.x,y=X8))+
stat_smooth(aes(x=BPRS_TOT.x,y=X8),method="lm")+
scale_x_continuous(breaks = c(1:14),labels=c(1:14))+
ggtitle(names(similarity_temp)[i])+
ylab("Inter-trial similarity")+
theme_classic()
scatter_plots_delay[[names(similarity_temp)[i]]][["L3_acc"]] <- ggplot(data=temp_plot_data)+
geom_point(aes(x=XDFR_MRI_ACC_L3,y=X8))+
stat_smooth(aes(x=XDFR_MRI_ACC_L3,y=X8),method="lm")+
scale_x_continuous(breaks = c(1:14),labels=c(1:14))+
ggtitle(names(similarity_temp)[i])+
ylab("Inter-trial similarity")+
theme_classic()
scatter_plots_cue[[names(similarity_temp)[i]]][["omnibus"]] <- ggplot(data=temp_plot_data)+
geom_point(aes(x=omnibus_span_no_DFR_MRI,y=X6))+
stat_smooth(aes(x=omnibus_span_no_DFR_MRI,y=X6),method="lm")+
scale_x_continuous(breaks = c(1:14),labels=c(1:14))+
ggtitle(names(similarity_temp)[i])+
ylab("Inter-trial similarity")+
theme_classic()
scatter_plots_cue[[names(similarity_temp)[i]]][["BPRS"]] <- ggplot(data=temp_plot_data)+
geom_point(aes(x=BPRS_TOT.x,y=X6))+
stat_smooth(aes(x=BPRS_TOT.x,y=X6),method="lm")+
scale_x_continuous(breaks = c(1:14),labels=c(1:14))+
ggtitle(names(similarity_temp)[i])+
ylab("Inter-trial similarity")+
theme_classic()
scatter_plots_cue[[names(similarity_temp)[i]]][["L3_acc"]] <- ggplot(data=temp_plot_data)+
geom_point(aes(x=XDFR_MRI_ACC_L3,y=X6))+
stat_smooth(aes(x=XDFR_MRI_ACC_L3,y=X6),method="lm")+
scale_x_continuous(breaks = c(1:14),labels=c(1:14))+
ggtitle(names(similarity_temp)[i])+
ylab("Inter-trial similarity")+
theme_classic()
scatter_plots_probe[[names(similarity_temp)[i]]][["omnibus"]] <- ggplot(data=temp_plot_data)+
geom_point(aes(x=omnibus_span_no_DFR_MRI,y=X11))+
stat_smooth(aes(x=omnibus_span_no_DFR_MRI,y=X11),method="lm")+
scale_x_continuous(breaks = c(1:14),labels=c(1:14))+
ggtitle(names(similarity_temp)[i])+
ylab("Inter-trial similarity")+
theme_classic()
scatter_plots_probe[[names(similarity_temp)[i]]][["BPRS"]] <- ggplot(data=temp_plot_data)+
geom_point(aes(x=BPRS_TOT.x,y=X11))+
stat_smooth(aes(x=BPRS_TOT.x,y=X11),method="lm")+
scale_x_continuous(breaks = c(1:14),labels=c(1:14))+
ggtitle(names(similarity_temp)[i])+
ylab("Inter-trial similarity")+
theme_classic()
scatter_plots_probe[[names(similarity_temp)[i]]][["L3_acc"]] <- ggplot(data=temp_plot_data)+
geom_point(aes(x=XDFR_MRI_ACC_L3,y=X11))+
stat_smooth(aes(x=XDFR_MRI_ACC_L3,y=X11),method="lm")+
scale_x_continuous(breaks = c(1:14),labels=c(1:14))+
ggtitle(names(similarity_temp)[i])+
ylab("Inter-trial similarity")+
theme_classic()
}
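The three nearly identical scatter blocks per epoch above could be factored into one helper. A sketch, assuming ggplot2 (>= 3.0 for the `.data` pronoun) and the same column naming; the `scale_x_continuous(breaks = c(1:14))` in the original blocks looks like a leftover from the TR plots (x is a behavioral measure here), so it is omitted:

```r
library(ggplot2)

# Hypothetical helper: scatter of inter-trial similarity at one TR
# against a behavioral measure, with a linear fit. `xvar` and `yvar`
# are column names passed as strings.
sim_scatter <- function(data, xvar, yvar, title) {
  ggplot(data, aes(x = .data[[xvar]], y = .data[[yvar]])) +
    geom_point() +
    stat_smooth(method = "lm") +
    ylab("Inter-trial similarity") +
    ggtitle(title) +
    theme_classic()
}

# e.g. sim_scatter(temp_plot_data, "BPRS_TOT.x", "X8", names(similarity_temp)[i])
```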
Just like before, we are not seeing any non-linear relationships, though the relationship between accuracy and similarity is statistically significant for all conditions except the low load incorrect trials.
(scatter_plots_cue[[1]][["omnibus"]] + scatter_plots_cue[[2]][["omnibus"]]) /
(scatter_plots_cue[[3]][["omnibus"]] + scatter_plots_cue[[4]][["omnibus"]])+
plot_layout(guides="collect")+
plot_annotation(title = "Omnibus span vs inter-trial similarity - encoding")
## Warning: Removed 57 rows containing non-finite values (stat_smooth).
## Warning: Removed 57 rows containing missing values (geom_point).
cor.test(similarity_temp[["high_correct_avg"]]$X5,data_to_plot$omnibus_span_no_DFR_MRI)
##
## Pearson's product-moment correlation
##
## data: similarity_temp[["high_correct_avg"]]$X5 and data_to_plot$omnibus_span_no_DFR_MRI
## t = 1.5617, df = 168, p-value = 0.1202
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## -0.03146036 0.26535694
## sample estimates:
## cor
## 0.1196203
cor.test(similarity_temp[["high_incorrect_avg"]]$X5,data_to_plot$omnibus_span_no_DFR_MRI)
##
## Pearson's product-moment correlation
##
## data: similarity_temp[["high_incorrect_avg"]]$X5 and data_to_plot$omnibus_span_no_DFR_MRI
## t = 1.2925, df = 168, p-value = 0.198
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## -0.05206472 0.24606635
## sample estimates:
## cor
## 0.09922712
(scatter_plots_cue[[1]][["BPRS"]] + scatter_plots_cue[[2]][["BPRS"]]) /
(scatter_plots_cue[[3]][["BPRS"]] + scatter_plots_cue[[4]][["BPRS"]])+
plot_layout(guides="collect")+
plot_annotation(title = "BPRS vs inter-trial similarity - encoding")
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 58 rows containing non-finite values (stat_smooth).
## Warning: Removed 58 rows containing missing values (geom_point).
cor.test(similarity_temp[["high_correct_avg"]]$X5,data_to_plot$BPRS_TOT.x)
##
## Pearson's product-moment correlation
##
## data: similarity_temp[["high_correct_avg"]]$X5 and data_to_plot$BPRS_TOT.x
## t = -0.4146, df = 167, p-value = 0.679
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## -0.1821446 0.1194721
## sample estimates:
## cor
## -0.03206624
cor.test(similarity_temp[["high_incorrect_avg"]]$X5,data_to_plot$BPRS_TOT.x)
##
## Pearson's product-moment correlation
##
## data: similarity_temp[["high_incorrect_avg"]]$X5 and data_to_plot$BPRS_TOT.x
## t = 0.45649, df = 167, p-value = 0.6486
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## -0.1162772 0.1852752
## sample estimates:
## cor
## 0.03530251
cor.test(similarity_temp[["low_incorrect_avg"]]$X5,data_to_plot$BPRS_TOT.x)
##
## Pearson's product-moment correlation
##
## data: similarity_temp[["low_incorrect_avg"]]$X5 and data_to_plot$BPRS_TOT.x
## t = -0.47578, df = 110, p-value = 0.6352
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## -0.2289480 0.1414276
## sample estimates:
## cor
## -0.04531738
(scatter_plots_cue[[1]][["L3_acc"]] + scatter_plots_cue[[2]][["L3_acc"]]) /
(scatter_plots_cue[[3]][["L3_acc"]] + scatter_plots_cue[[4]][["L3_acc"]])+
plot_layout(guides="collect")+
plot_annotation(title = "L3_acc vs inter-trial similarity - encoding")
## Warning: Removed 57 rows containing non-finite values (stat_smooth).
## Warning: Removed 57 rows containing missing values (geom_point).
cor.test(similarity_temp[["high_correct_avg"]]$X5,data_to_plot$XDFR_MRI_ACC_L3)
##
## Pearson's product-moment correlation
##
## data: similarity_temp[["high_correct_avg"]]$X5 and data_to_plot$XDFR_MRI_ACC_L3
## t = 4.5927, df = 168, p-value = 8.545e-06
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## 0.1931873 0.4613139
## sample estimates:
## cor
## 0.33399
cor.test(similarity_temp[["high_incorrect_avg"]]$X5,data_to_plot$XDFR_MRI_ACC_L3)
##
## Pearson's product-moment correlation
##
## data: similarity_temp[["high_incorrect_avg"]]$X5 and data_to_plot$XDFR_MRI_ACC_L3
## t = 2.7355, df = 168, p-value = 0.006897
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## 0.05778316 0.34625363
## sample estimates:
## cor
## 0.2065014
cor.test(similarity_temp[["low_correct_avg"]]$X5,data_to_plot$XDFR_MRI_ACC_L3)
##
## Pearson's product-moment correlation
##
## data: similarity_temp[["low_correct_avg"]]$X5 and data_to_plot$XDFR_MRI_ACC_L3
## t = 2.2893, df = 168, p-value = 0.02331
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## 0.02404603 0.31616805
## sample estimates:
## cor
## 0.1739308
cor.test(similarity_temp[["low_incorrect_avg"]]$X5,data_to_plot$XDFR_MRI_ACC_L3)
##
## Pearson's product-moment correlation
##
## data: similarity_temp[["low_incorrect_avg"]]$X5 and data_to_plot$XDFR_MRI_ACC_L3
## t = 0.57367, df = 111, p-value = 0.5674
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## -0.1316830 0.2367217
## sample estimates:
## cor
## 0.05436942
Same - no non-linear relationships. Here, only the high load correct trials show a significant relationship with high load accuracy.
(scatter_plots_delay[[1]][["omnibus"]] + scatter_plots_delay[[2]][["omnibus"]]) /
(scatter_plots_delay[[3]][["omnibus"]] + scatter_plots_delay[[4]][["omnibus"]])+
plot_layout(guides="collect")+
plot_annotation(title = "Omnibus span vs inter-trial similarity - delay")
## Warning: Removed 57 rows containing non-finite values (stat_smooth).
## Warning: Removed 57 rows containing missing values (geom_point).
cor.test(similarity_temp[["low_correct_avg"]]$X8,data_to_plot$omnibus_span_no_DFR_MRI)
##
## Pearson's product-moment correlation
##
## data: similarity_temp[["low_correct_avg"]]$X8 and data_to_plot$omnibus_span_no_DFR_MRI
## t = 1.1661, df = 168, p-value = 0.2452
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## -0.06174178 0.23692407
## sample estimates:
## cor
## 0.08960518
(scatter_plots_delay[[1]][["BPRS"]] + scatter_plots_delay[[2]][["BPRS"]]) /
(scatter_plots_delay[[3]][["BPRS"]] + scatter_plots_delay[[4]][["BPRS"]])+
plot_layout(guides="collect")+
plot_annotation(title = "BPRS vs inter-trial similarity - delay")
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 58 rows containing non-finite values (stat_smooth).
## Warning: Removed 58 rows containing missing values (geom_point).
(scatter_plots_delay[[1]][["L3_acc"]] + scatter_plots_delay[[2]][["L3_acc"]]) /
(scatter_plots_delay[[3]][["L3_acc"]] + scatter_plots_delay[[4]][["L3_acc"]])+
plot_layout(guides="collect")+
plot_annotation(title = "L3_acc vs inter-trial similarity - delay")
## Warning: Removed 57 rows containing non-finite values (stat_smooth).
## Warning: Removed 57 rows containing missing values (geom_point).
cor.test(similarity_temp[["high_correct_avg"]]$X8,data_to_plot$XDFR_MRI_ACC_L3)
##
## Pearson's product-moment correlation
##
## data: similarity_temp[["high_correct_avg"]]$X8 and data_to_plot$XDFR_MRI_ACC_L3
## t = 3.4373, df = 168, p-value = 0.0007407
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## 0.1100649 0.3917331
## sample estimates:
## cor
## 0.2563326
cor.test(similarity_temp[["high_incorrect_avg"]]$X8,data_to_plot$XDFR_MRI_ACC_L3)
##
## Pearson's product-moment correlation
##
## data: similarity_temp[["high_incorrect_avg"]]$X8 and data_to_plot$XDFR_MRI_ACC_L3
## t = 1.7106, df = 168, p-value = 0.089
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## -0.02006702 0.27592275
## sample estimates:
## cor
## 0.1308424
cor.test(similarity_temp[["low_correct_avg"]]$X8,data_to_plot$XDFR_MRI_ACC_L3)
##
## Pearson's product-moment correlation
##
## data: similarity_temp[["low_correct_avg"]]$X8 and data_to_plot$XDFR_MRI_ACC_L3
## t = 1.771, df = 168, p-value = 0.07838
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## -0.01545278 0.28018159
## sample estimates:
## cor
## 0.1353763
cor.test(similarity_temp[["low_incorrect_avg"]]$X8,data_to_plot$XDFR_MRI_ACC_L3)
##
## Pearson's product-moment correlation
##
## data: similarity_temp[["low_incorrect_avg"]]$X8 and data_to_plot$XDFR_MRI_ACC_L3
## t = -0.5045, df = 111, p-value = 0.6149
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## -0.2305234 0.1381198
## sample estimates:
## cor
## -0.04783045
At the probe, both high load accuracy and omnibus span correlate with similarity in the high load trials, regardless of trial accuracy.
(scatter_plots_probe[[1]][["omnibus"]] + scatter_plots_probe[[2]][["omnibus"]]) /
(scatter_plots_probe[[3]][["omnibus"]] + scatter_plots_probe[[4]][["omnibus"]])+
plot_layout(guides="collect")+
plot_annotation(title = "Omnibus span vs inter-trial similarity - probe")
## Warning: Removed 57 rows containing non-finite values (stat_smooth).
## Warning: Removed 57 rows containing missing values (geom_point).
cor.test(similarity_temp[["high_correct_avg"]]$X11,data_to_plot$omnibus_span_no_DFR_MRI)
##
## Pearson's product-moment correlation
##
## data: similarity_temp[["high_correct_avg"]]$X11 and data_to_plot$omnibus_span_no_DFR_MRI
## t = 2.3836, df = 168, p-value = 0.01826
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## 0.03119825 0.32259561
## sample estimates:
## cor
## 0.1808632
cor.test(similarity_temp[["high_incorrect_avg"]]$X11,data_to_plot$omnibus_span_no_DFR_MRI)
##
## Pearson's product-moment correlation
##
## data: similarity_temp[["high_incorrect_avg"]]$X11 and data_to_plot$omnibus_span_no_DFR_MRI
## t = 2.2925, df = 168, p-value = 0.02312
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## 0.02428737 0.31638537
## sample estimates:
## cor
## 0.1741649
(scatter_plots_probe[[1]][["BPRS"]] + scatter_plots_probe[[2]][["BPRS"]]) /
(scatter_plots_probe[[3]][["BPRS"]] + scatter_plots_probe[[4]][["BPRS"]])+
plot_layout(guides="collect")+
plot_annotation(title = "BPRS vs inter-trial similarity - probe")
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 58 rows containing non-finite values (stat_smooth).
## Warning: Removed 58 rows containing missing values (geom_point).
cor.test(similarity_temp[["low_correct_avg"]]$X11,data_to_plot$BPRS_TOT.x)
##
## Pearson's product-moment correlation
##
## data: similarity_temp[["low_correct_avg"]]$X11 and data_to_plot$BPRS_TOT.x
## t = 1.2655, df = 167, p-value = 0.2075
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## -0.05429678 0.24482013
## sample estimates:
## cor
## 0.09746211
(scatter_plots_probe[[1]][["L3_acc"]] + scatter_plots_probe[[2]][["L3_acc"]]) /
(scatter_plots_probe[[3]][["L3_acc"]] + scatter_plots_probe[[4]][["L3_acc"]])+
plot_layout(guides="collect")+
plot_annotation(title = "L3_acc vs inter-trial similarity - probe")
## Warning: Removed 57 rows containing non-finite values (stat_smooth).
## Warning: Removed 57 rows containing missing values (geom_point).
cor.test(similarity_temp[["high_correct_avg"]]$X11,data_to_plot$XDFR_MRI_ACC_L3)
##
## Pearson's product-moment correlation
##
## data: similarity_temp[["high_correct_avg"]]$X11 and data_to_plot$XDFR_MRI_ACC_L3
## t = 4.6826, df = 168, p-value = 5.814e-06
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## 0.1994625 0.4664362
## sample estimates:
## cor
## 0.339776
cor.test(similarity_temp[["high_incorrect_avg"]]$X11,data_to_plot$XDFR_MRI_ACC_L3)
##
## Pearson's product-moment correlation
##
## data: similarity_temp[["high_incorrect_avg"]]$X11 and data_to_plot$XDFR_MRI_ACC_L3
## t = 2.4804, df = 168, p-value = 0.01411
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## 0.03852978 0.32915653
## sample estimates:
## cor
## 0.187954
cor.test(similarity_temp[["low_correct_avg"]]$X11,data_to_plot$XDFR_MRI_ACC_L3)
##
## Pearson's product-moment correlation
##
## data: similarity_temp[["low_correct_avg"]]$X11 and data_to_plot$XDFR_MRI_ACC_L3
## t = -0.2819, df = 168, p-value = 0.7784
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## -0.1716965 0.1291929
## sample estimates:
## cor
## -0.02174414
cor.test(similarity_temp[["low_incorrect_avg"]]$X11,data_to_plot$XDFR_MRI_ACC_L3)
##
## Pearson's product-moment correlation
##
## data: similarity_temp[["low_incorrect_avg"]]$X11 and data_to_plot$XDFR_MRI_ACC_L3
## t = 0.81138, df = 111, p-value = 0.4189
## alternative hypothesis: true correlation is not equal to 0
## 95 percent confidence interval:
## -0.1094975 0.2578577
## sample estimates:
## cor
## 0.07678552
Following the same analysis pattern as before, we are now going to correlate across TRs.
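As a point of reference, the leave-one-out "correct trial" template correlation described in the introduction can be sketched as below. `trial_patterns` and `correct_idx` are hypothetical stand-ins (one TR's trials x voxels matrix and a correct-trial indicator), not the actual pipeline objects:

```r
# Sketch of the leave-one-out template similarity. `trial_patterns` (trials x
# voxels) and `correct_idx` are illustrative stand-ins, not the real data.
set.seed(1)
trial_patterns <- matrix(rnorm(64 * 100), nrow = 64, ncol = 100)
correct_idx <- rep(c(TRUE, FALSE), times = c(48, 16))

trial_similarity <- sapply(seq_len(nrow(trial_patterns)), function(t) {
  if (correct_idx[t]) {
    # Correct trial: template is the mean of all OTHER correct trials
    template <- colMeans(trial_patterns[correct_idx & seq_len(nrow(trial_patterns)) != t, , drop = FALSE])
  } else {
    # Incorrect trial: template is the mean of ALL correct trials
    template <- colMeans(trial_patterns[correct_idx, , drop = FALSE])
  }
  cor(trial_patterns[t, ], template)
})
length(trial_similarity)  # one correlation per trial
```

In the real analysis these 64 per-trial correlations are then averaged within the four load-by-accuracy conditions.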
encoding_to_delay_plots <- list()
# Lookup tables for the behavioral x-axes and the trial conditions (y-axes)
x_vars <- c(omnibus = "omnibus_span_no_DFR_MRI",
            L3_Acc  = "XDFR_MRI_ACC_L3",
            BPRS    = "BPRS_TOT.x")
conditions <- c(low_load_incorrect  = "low load incorrect",
                high_load_incorrect = "high load incorrect",
                low_load_correct    = "low load correct",
                high_load_correct   = "high load correct")
cond_titles <- c(low_load_incorrect  = "Low Load - Incorrect trials",
                 high_load_incorrect = "High Load - Incorrect trials",
                 low_load_correct    = "Low Load - Correct trials",
                 high_load_correct   = "High Load - Correct trials")
for (i in c(6,10,12)){
  colnames(similarity_temp[[i]]) <- unlist(similarity_temp[[1]][[1]])
  similarity_temp[[i]][similarity_temp[[i]] == 0] <- NA
  temp_plot_data <- cbind.data.frame(data_to_plot,similarity_temp[[i]])
  for (x_name in names(x_vars)){
    for (cond in names(conditions)){
      encoding_to_delay_plots[[names(similarity_temp)[i]]][[x_name]][[cond]] <-
        ggplot(data = temp_plot_data,
               aes(x = .data[[x_vars[[x_name]]]], y = .data[[conditions[[cond]]]]))+
        geom_point()+
        stat_smooth(method="lm")+
        ylab("Similarity")+
        ggtitle(cond_titles[[cond]])+
        theme_classic()
    }
  }
}
temp_plot_data <- cbind.data.frame(data_to_plot,similarity_temp[["correct_encoding_to_correct_delay"]])
colnames(temp_plot_data)[9] <- "correct_encoding_delay"
encoding_to_delay_plots[["correct_encoding_to_correct_delay"]][["omnibus"]] <- ggplot(data =
temp_plot_data,aes(x=omnibus_span_no_DFR_MRI,y=correct_encoding_delay))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("Template encoding/delay vs Omnibus Span")+
theme_classic()
encoding_to_delay_plots[["correct_encoding_to_correct_delay"]][["BPRS"]] <- ggplot(data =
temp_plot_data,aes(x=BPRS_TOT.x,y=correct_encoding_delay))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("Template encoding/delay vs BPRS")+
theme_classic()
encoding_to_delay_plots[["correct_encoding_to_correct_delay"]][["L3_Acc"]] <- ggplot(data =
temp_plot_data,aes(x=XDFR_MRI_ACC_L3,y=correct_encoding_delay))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("Template encoding/delay vs L3 accuracy")+
theme_classic()
Just like before, black represents a statistically significant value (p < 0.05). Scatter plots will be shown below.
correlations = list()
for (i in c(6,10,12)){
colnames(similarity_temp[[i]]) <- unlist(similarity_temp[[1]][[1]])
temp_list <- list(r = data.frame(matrix(nrow=4,ncol=6)), p = data.frame(matrix(nrow=4,ncol=6)))
for (behav in seq.int(2,7)){
for (sim in seq.int(1,4)){
temp_corr <- cor.test(similarity_temp[[i]][,sim],data_to_plot[,behav])
temp_list[["r"]][sim,behav-1] <- temp_corr$estimate
temp_list[["p"]][sim,behav-1] <- temp_corr$p.value
}
}
colnames(temp_list[["r"]]) <- colnames(data_to_plot)[2:7]
rownames(temp_list[["r"]]) <- colnames(similarity_temp[[i]])
colnames(temp_list[["p"]]) <- colnames(data_to_plot)[2:7]
rownames(temp_list[["p"]]) <- colnames(similarity_temp[[i]])
correlations[[names(similarity_temp)[i]]] <- temp_list
}
temp <- data.frame(r=matrix(nrow=6,ncol=1),p=matrix(nrow=6,ncol=1))
rownames(temp) <- colnames(data_to_plot)[2:7]
for (behav in seq.int(2,7)){
temp_corr <- cor.test(similarity_temp[["correct_encoding_to_correct_delay"]],data_to_plot[,behav])
temp$r[behav-1] <- temp_corr$estimate
temp$p[behav-1] <- temp_corr$p.value
}
correlations[["correct_encoding_to_correct_delay"]] <- temp
There is a statistically significant negative correlation between high load accuracy and similarity on correct trials (regardless of load), and a trend toward a negative correlation between omnibus span and similarity on correct low load trials. There is also a significant relationship between omnibus span and similarity on incorrect low load trials but, as before, the small number of trials in that condition warrants caution. There are no significant correlations with clinical measures.
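One way to quantify that caution: with roughly 113 subjects contributing low load incorrect trials (versus roughly 170 for the other conditions, sample sizes taken from the degrees of freedom in the cor.test output above), the smallest correlation detectable at p < 0.05 grows. A quick sketch:

```r
# Smallest |r| reaching two-tailed p < .05 for a given sample size,
# derived from the t statistic of the correlation test: t = r*sqrt(df)/sqrt(1-r^2)
critical_r <- function(n, alpha = 0.05) {
  t_crit <- qt(1 - alpha / 2, df = n - 2)
  t_crit / sqrt(t_crit^2 + n - 2)
}
critical_r(170)  # conditions with the full sample, about 0.15
critical_r(113)  # the sparser low load incorrect condition, about 0.18
```

So a correlation of, say, 0.16 would reach significance in the full-sample conditions but not in the low load incorrect condition.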
correlations[["encoding_to_delay_avg"]][["r"]] %>%
mutate(
condition = row.names(.),
omnibus_span_no_DFR_MRI = cell_spec(omnibus_span_no_DFR_MRI, "html",
color =ifelse(correlations[["encoding_to_delay_avg"]][["p"]]$omnibus_span_no_DFR_MRI < 0.05, "black", "grey")),
XDFR_MRI_ACC_L3 = cell_spec(XDFR_MRI_ACC_L3, "html",
color =ifelse(correlations[["encoding_to_delay_avg"]][["p"]]$XDFR_MRI_ACC_L3 < 0.05, "black", "grey")),
BPRS_TOT.x = cell_spec(BPRS_TOT.x, "html",
color =ifelse(correlations[["encoding_to_delay_avg"]][["p"]]$BPRS_TOT.x < 0.05, "black", "grey"))
) %>%
select(condition,omnibus_span_no_DFR_MRI,XDFR_MRI_ACC_L3,BPRS_TOT.x) %>%
kable(format = "html", escape = F) %>%
kable_styling("striped", full_width = F) %>%
add_header_above((c(" ", "Individual Encoding to Individual Delay" = 3)))
| condition | omnibus_span_no_DFR_MRI | XDFR_MRI_ACC_L3 | BPRS_TOT.x |
|---|---|---|---|
| low load incorrect | -0.131003807047542 | 0.00201704162365899 | -0.101112090700478 |
| high load incorrect | 0.0756928888526784 | -0.0751276248534951 | 0.0186020785742466 |
| low load correct | -0.14459654921975 | -0.183072274547306 | -0.0331690589880607 |
| high load correct | -0.00177587951246186 | -0.18548145471416 | -0.0176013091545108 |
This measure can distinguish load but not accuracy.
encoding_to_delay_avg <- fisherz(similarity_temp[["encoding_to_delay_avg"]])
t.test(encoding_to_delay_avg[,4],encoding_to_delay_avg[,2],paired=TRUE)
##
## Paired t-test
##
## data: encoding_to_delay_avg[, 4] and encoding_to_delay_avg[, 2]
## t = -0.23475, df = 169, p-value = 0.8147
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.01993260 0.01569585
## sample estimates:
## mean of the differences
## -0.002118377
t.test(encoding_to_delay_avg[,4],encoding_to_delay_avg[,3],paired=TRUE)
##
## Paired t-test
##
## data: encoding_to_delay_avg[, 4] and encoding_to_delay_avg[, 3]
## t = 11.73, df = 169, p-value < 2.2e-16
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## 0.08900045 0.12501985
## sample estimates:
## mean of the differences
## 0.1070102
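The paired t-tests above report mean differences in Fisher-z units; a paired Cohen's d (mean difference over the SD of the differences) can make the size of the load effect easier to compare across similarity measures. A minimal sketch on simulated stand-in columns (with the real data this would be, e.g., `paired_d(encoding_to_delay_avg[, 4], encoding_to_delay_avg[, 3])`):

```r
# Paired Cohen's d: mean of the pairwise differences divided by their SD
paired_d <- function(x, y) {
  d <- x - y
  mean(d, na.rm = TRUE) / sd(d, na.rm = TRUE)
}

# Illustration on simulated Fisher-z values (stand-ins for the real columns)
set.seed(2)
z_high_correct <- rnorm(170, mean = 0.30, sd = 0.10)
z_low_correct  <- rnorm(170, mean = 0.20, sd = 0.10)
paired_d(z_high_correct, z_low_correct)
```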
(encoding_to_delay_plots[["encoding_to_delay_avg"]][["omnibus"]][["high_load_correct"]] + encoding_to_delay_plots[["encoding_to_delay_avg"]][["omnibus"]][["high_load_incorrect"]]) /
(encoding_to_delay_plots[["encoding_to_delay_avg"]][["omnibus"]][["low_load_correct"]] +
encoding_to_delay_plots[["encoding_to_delay_avg"]][["omnibus"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs omnibus")
## Warning: Removed 57 rows containing non-finite values (stat_smooth).
## Warning: Removed 57 rows containing missing values (geom_point).
(encoding_to_delay_plots[["encoding_to_delay_avg"]][["L3_Acc"]][["high_load_correct"]] + encoding_to_delay_plots[["encoding_to_delay_avg"]][["L3_Acc"]][["high_load_incorrect"]]) /
(encoding_to_delay_plots[["encoding_to_delay_avg"]][["L3_Acc"]][["low_load_correct"]] +
encoding_to_delay_plots[["encoding_to_delay_avg"]][["L3_Acc"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs High Load Accuracy")
## Warning: Removed 57 rows containing non-finite values (stat_smooth).
## Warning: Removed 57 rows containing missing values (geom_point).
(encoding_to_delay_plots[["encoding_to_delay_avg"]][["BPRS"]][["high_load_correct"]] + encoding_to_delay_plots[["encoding_to_delay_avg"]][["BPRS"]][["high_load_incorrect"]]) /
(encoding_to_delay_plots[["encoding_to_delay_avg"]][["BPRS"]][["low_load_correct"]] +
encoding_to_delay_plots[["encoding_to_delay_avg"]][["BPRS"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs BPRS Total")
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 58 rows containing non-finite values (stat_smooth).
## Warning: Removed 58 rows containing missing values (geom_point).
Here we start to see a different pattern from the fusiform: there is a significant negative correlation between similarity on correct high load trials and omnibus span, and likewise with high load accuracy. There are no significant relationships with clinical measures.
This differs from the fusiform, where we saw relationships with clinical measures but no significant relationships with omnibus span.
correlations[["encoding_to_correct_delay_avg"]][["r"]] %>%
mutate(
condition = row.names(.),
omnibus_span_no_DFR_MRI = cell_spec(omnibus_span_no_DFR_MRI, "html",
color =ifelse(correlations[["encoding_to_correct_delay_avg"]][["p"]]$omnibus_span_no_DFR_MRI < 0.05, "black", "grey")),
XDFR_MRI_ACC_L3 = cell_spec(XDFR_MRI_ACC_L3, "html",
color =ifelse(correlations[["encoding_to_correct_delay_avg"]][["p"]]$XDFR_MRI_ACC_L3 < 0.05, "black", "grey")),
BPRS_TOT.x = cell_spec(BPRS_TOT.x, "html",
color =ifelse(correlations[["encoding_to_correct_delay_avg"]][["p"]]$BPRS_TOT.x < 0.05, "black", "grey"))
) %>%
select(condition,omnibus_span_no_DFR_MRI,XDFR_MRI_ACC_L3,BPRS_TOT.x) %>%
kable(format = "html", escape = F) %>%
kable_styling("striped", full_width = F) %>%
add_header_above((c(" ", "Individual Encoding to Template Delay" = 3)))
| condition | omnibus_span_no_DFR_MRI | XDFR_MRI_ACC_L3 | BPRS_TOT.x |
|---|---|---|---|
| low load incorrect | -0.00442964778690035 | -0.00610835379911364 | 0.0203239795220361 |
| high load incorrect | -0.036929537132356 | -0.0403694255449535 | -0.0558826589752957 |
| low load correct | 0.0401578811970275 | -0.0679254364734098 | -0.0279882685192926 |
| high load correct | -0.184028837558584 | -0.158219670027366 | -0.00220693705890965 |
This measure can distinguish both load and accuracy.
encoding_to_correct_delay_avg <- fisherz(similarity_temp[["encoding_to_correct_delay_avg"]])
t.test(encoding_to_correct_delay_avg[,4],encoding_to_correct_delay_avg[,2],paired=TRUE)
##
## Paired t-test
##
## data: encoding_to_correct_delay_avg[, 4] and encoding_to_correct_delay_avg[, 2]
## t = -2.5283, df = 169, p-value = 0.01238
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.034415229 -0.004236351
## sample estimates:
## mean of the differences
## -0.01932579
t.test(encoding_to_correct_delay_avg[,4],encoding_to_correct_delay_avg[,3],paired=TRUE)
##
## Paired t-test
##
## data: encoding_to_correct_delay_avg[, 4] and encoding_to_correct_delay_avg[, 3]
## t = -7.7977, df = 169, p-value = 6.18e-13
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.08973715 -0.05347964
## sample estimates:
## mean of the differences
## -0.0716084
(encoding_to_delay_plots[["encoding_to_correct_delay_avg"]][["omnibus"]][["high_load_correct"]] + encoding_to_delay_plots[["encoding_to_correct_delay_avg"]][["omnibus"]][["high_load_incorrect"]]) /
(encoding_to_delay_plots[["encoding_to_correct_delay_avg"]][["omnibus"]][["low_load_correct"]] +
encoding_to_delay_plots[["encoding_to_correct_delay_avg"]][["omnibus"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs omnibus")
## Warning: Removed 57 rows containing non-finite values (stat_smooth).
## Warning: Removed 57 rows containing missing values (geom_point).
(encoding_to_delay_plots[["encoding_to_correct_delay_avg"]][["L3_Acc"]][["high_load_correct"]] + encoding_to_delay_plots[["encoding_to_correct_delay_avg"]][["L3_Acc"]][["high_load_incorrect"]]) /
(encoding_to_delay_plots[["encoding_to_correct_delay_avg"]][["L3_Acc"]][["low_load_correct"]] +
encoding_to_delay_plots[["encoding_to_correct_delay_avg"]][["L3_Acc"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs High Load Accuracy")
## Warning: Removed 57 rows containing non-finite values (stat_smooth).
## Warning: Removed 57 rows containing missing values (geom_point).
(encoding_to_delay_plots[["encoding_to_correct_delay_avg"]][["BPRS"]][["high_load_correct"]] + encoding_to_delay_plots[["encoding_to_correct_delay_avg"]][["BPRS"]][["high_load_incorrect"]]) /
(encoding_to_delay_plots[["encoding_to_correct_delay_avg"]][["BPRS"]][["low_load_correct"]] +
encoding_to_delay_plots[["encoding_to_correct_delay_avg"]][["BPRS"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs BPRS Total")
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 58 rows containing non-finite values (stat_smooth).
## Warning: Removed 58 rows containing missing values (geom_point).
Here, we only see a significant negative correlation between similarity on correct low load trials and high load accuracy. This is starkly different from the fusiform similarity, where we saw relationships between similarity on correct high load trials and both omnibus span and BPRS. Also notable: in the fusiform, all trial types showed a correlation between similarity and high load accuracy.
correlations[["correct_encoding_to_delay_avg"]][["r"]] %>%
mutate(
condition = row.names(.),
omnibus_span_no_DFR_MRI = cell_spec(omnibus_span_no_DFR_MRI, "html",
color =ifelse(correlations[["correct_encoding_to_delay_avg"]][["p"]]$omnibus_span_no_DFR_MRI < 0.05, "black", "grey")),
XDFR_MRI_ACC_L3 = cell_spec(XDFR_MRI_ACC_L3, "html",
color =ifelse(correlations[["correct_encoding_to_delay_avg"]][["p"]]$XDFR_MRI_ACC_L3 < 0.05, "black", "grey")),
BPRS_TOT.x = cell_spec(BPRS_TOT.x, "html",
color =ifelse(correlations[["correct_encoding_to_delay_avg"]][["p"]]$BPRS_TOT.x < 0.05, "black", "grey"))
) %>%
select(condition,omnibus_span_no_DFR_MRI,XDFR_MRI_ACC_L3,BPRS_TOT.x) %>%
kable(format = "html", escape = F) %>%
kable_styling("striped", full_width = F) %>%
add_header_above((c(" ", "Template Encoding to Delay" = 3)))
| condition | omnibus_span_no_DFR_MRI | XDFR_MRI_ACC_L3 | BPRS_TOT.x |
|---|---|---|---|
| low load incorrect | -0.142923394839187 | -0.0628260628135304 | -0.0408537364515526 |
| high load incorrect | 0.0759251542107984 | 0.0495684274120333 | -0.0412615577186966 |
| low load correct | -0.0928325164173533 | -0.196674391063135 | -0.00556839298229165 |
| high load correct | 0.000452855700311408 | -0.043525624420269 | -0.126213928286512 |
This measure can only distinguish load.
correct_encoding_to_delay_avg <- fisherz(similarity_temp[["correct_encoding_to_delay_avg"]])
t.test(correct_encoding_to_delay_avg[,4],correct_encoding_to_delay_avg[,2],paired=TRUE)
##
## Paired t-test
##
## data: correct_encoding_to_delay_avg[, 4] and correct_encoding_to_delay_avg[, 2]
## t = -1.2422, df = 169, p-value = 0.2159
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.026610708 0.006055906
## sample estimates:
## mean of the differences
## -0.0102774
t.test(correct_encoding_to_delay_avg[,4],correct_encoding_to_delay_avg[,3],paired=TRUE)
##
## Paired t-test
##
## data: correct_encoding_to_delay_avg[, 4] and correct_encoding_to_delay_avg[, 3]
## t = 5.6767, df = 169, p-value = 5.86e-08
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## 0.03072772 0.06349347
## sample estimates:
## mean of the differences
## 0.04711059
(encoding_to_delay_plots[["correct_encoding_to_delay_avg"]][["omnibus"]][["high_load_correct"]] + encoding_to_delay_plots[["correct_encoding_to_delay_avg"]][["omnibus"]][["high_load_incorrect"]]) /
(encoding_to_delay_plots[["correct_encoding_to_delay_avg"]][["omnibus"]][["low_load_correct"]] +
encoding_to_delay_plots[["correct_encoding_to_delay_avg"]][["omnibus"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs omnibus")
## Warning: Removed 57 rows containing non-finite values (stat_smooth).
## Warning: Removed 57 rows containing missing values (geom_point).
(encoding_to_delay_plots[["correct_encoding_to_delay_avg"]][["L3_Acc"]][["high_load_correct"]] + encoding_to_delay_plots[["correct_encoding_to_delay_avg"]][["L3_Acc"]][["high_load_incorrect"]]) /
(encoding_to_delay_plots[["correct_encoding_to_delay_avg"]][["L3_Acc"]][["low_load_correct"]] +
encoding_to_delay_plots[["correct_encoding_to_delay_avg"]][["L3_Acc"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs High Load Accuracy")
## Warning: Removed 57 rows containing non-finite values (stat_smooth).
## Warning: Removed 57 rows containing missing values (geom_point).
(encoding_to_delay_plots[["correct_encoding_to_delay_avg"]][["BPRS"]][["high_load_correct"]] + encoding_to_delay_plots[["correct_encoding_to_delay_avg"]][["BPRS"]][["high_load_incorrect"]]) /
(encoding_to_delay_plots[["correct_encoding_to_delay_avg"]][["BPRS"]][["low_load_correct"]] +
encoding_to_delay_plots[["correct_encoding_to_delay_avg"]][["BPRS"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs BPRS Total")
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 58 rows containing non-finite values (stat_smooth).
## Warning: Removed 58 rows containing missing values (geom_point).
No significant relationships here; the only correlation that even trends is with accuracy. This is very different from the fusiform, where there were significant relationships between similarity and all three behavioral measures.
correlations[["correct_encoding_to_correct_delay"]][c(2,3,5),] %>%
mutate(
condition = row.names(.),
r = cell_spec(r, "html",
color =ifelse(p < 0.05, "black", "grey"))
) %>%
select(condition,r) %>%
kable(format = "html", escape = F) %>%
kable_styling("striped", full_width = F) %>%
add_header_above(c(" ", "Template Encoding to Template Delay" = 1))
| condition | r |
|---|---|
| omnibus_span_no_DFR_MRI | -0.0459543951859351 |
| XDFR_MRI_ACC_L3 | -0.141366038958814 |
| BPRS_TOT.x | -0.0350881739554959 |
encoding_to_delay_plots[["correct_encoding_to_correct_delay"]][["omnibus"]]
encoding_to_delay_plots[["correct_encoding_to_correct_delay"]][["L3_Acc"]]
encoding_to_delay_plots[["correct_encoding_to_correct_delay"]][["BPRS"]]
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
Now, let’s try encoding to probe. This measure shows few relationships with behavior, though some of the individual correlation measures can distinguish between high and low load trials.
encoding_to_probe_plots <- list()
# Lookup tables for the behavioral x-axes and the trial conditions (y-axes)
x_vars <- c(omnibus = "omnibus_span_no_DFR_MRI",
            L3_Acc  = "XDFR_MRI_ACC_L3",
            BPRS    = "BPRS_TOT.x")
conditions <- c(low_load_incorrect  = "low load incorrect",
                high_load_incorrect = "high load incorrect",
                low_load_correct    = "low load correct",
                high_load_correct   = "high load correct")
cond_titles <- c(low_load_incorrect  = "Low Load - Incorrect trials",
                 high_load_incorrect = "High Load - Incorrect trials",
                 low_load_correct    = "Low Load - Correct trials",
                 high_load_correct   = "High Load - Correct trials")
for (i in c(7,11,13)){
  colnames(similarity_temp[[i]]) <- unlist(similarity_temp[[1]][[1]])
  similarity_temp[[i]][similarity_temp[[i]] == 0] <- NA
  temp_plot_data <- cbind.data.frame(data_to_plot,similarity_temp[[i]])
  for (x_name in names(x_vars)){
    for (cond in names(conditions)){
      encoding_to_probe_plots[[names(similarity_temp)[i]]][[x_name]][[cond]] <-
        ggplot(data = temp_plot_data,
               aes(x = .data[[x_vars[[x_name]]]], y = .data[[conditions[[cond]]]]))+
        geom_point()+
        stat_smooth(method="lm")+
        ylab("Similarity")+
        ggtitle(cond_titles[[cond]])+
        theme_classic()
    }
  }
}
temp_plot_data <- cbind.data.frame(data_to_plot,similarity_temp[["correct_encoding_to_correct_probe"]])
colnames(temp_plot_data)[9] <- "correct_encoding_probe"
encoding_to_probe_plots[["correct_encoding_to_correct_probe"]][["omnibus"]] <- ggplot(data =
temp_plot_data,aes(x=omnibus_span_no_DFR_MRI,y=correct_encoding_probe))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("Template encoding/probe vs Omnibus Span")+
theme_classic()
encoding_to_probe_plots[["correct_encoding_to_correct_probe"]][["BPRS"]] <- ggplot(data =
temp_plot_data,aes(x=BPRS_TOT.x,y=correct_encoding_probe))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("Template encoding/probe vs BPRS")+
theme_classic()
encoding_to_probe_plots[["correct_encoding_to_correct_probe"]][["L3_Acc"]] <- ggplot(data =
temp_plot_data,aes(x=XDFR_MRI_ACC_L3,y=correct_encoding_probe))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("Template encoding/probe vs L3 accuracy")+
theme_classic()
In the tables below, a correlation shown in black is statistically significant (p < 0.05); otherwise it is shown in grey.
correlations = list()
for (i in c(7,11,13)){
colnames(similarity_temp[[i]]) <- unlist(similarity_temp[[1]][[1]])
temp_list <- list(r = data.frame(matrix(nrow=4,ncol=6)), p = data.frame(matrix(nrow=4,ncol=6)))
for (behav in seq.int(2,7)){
for (sim in seq.int(1,4)){
temp_corr <- cor.test(similarity_temp[[i]][,sim],data_to_plot[,behav])
temp_list[["r"]][sim,behav-1] <- temp_corr$estimate
temp_list[["p"]][sim,behav-1] <- temp_corr$p.value
}
}
colnames(temp_list[["r"]]) <- colnames(data_to_plot)[2:7]
rownames(temp_list[["r"]]) <- colnames(similarity_temp[[i]])
colnames(temp_list[["p"]]) <- colnames(data_to_plot)[2:7]
rownames(temp_list[["p"]]) <- colnames(similarity_temp[[i]])
correlations[[names(similarity_temp)[i]]] <- temp_list
}
temp <- data.frame(r=matrix(nrow=6,ncol=1),p=matrix(nrow=6,ncol=1))
rownames(temp) <- colnames(data_to_plot)[2:7]
for (behav in seq.int(2,7)){
temp_corr <- cor.test(similarity_temp[["correct_encoding_to_correct_probe"]],data_to_plot[,behav])
temp$r[behav-1] <- temp_corr$estimate
temp$p[behav-1] <- temp_corr$p.value
}
correlations[["correct_encoding_to_correct_probe"]] <- temp
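Given how many correlation tests this section runs (four conditions by six behavioral measures per similarity type), it may be worth checking which effects survive a false discovery rate correction; `p.adjust` makes this a one-liner. A sketch on an illustrative p-value vector (the real values live in `correlations[[...]][["p"]]`, e.g. `p.adjust(unlist(correlations[["encoding_to_probe_avg"]][["p"]]), method = "BH")`):

```r
# FDR (Benjamini-Hochberg) adjustment across a family of correlation p-values
p_raw <- c(0.001, 0.014, 0.208, 0.419, 0.778, 0.036)  # illustrative values
p_fdr <- p.adjust(p_raw, method = "BH")
round(p_fdr, 3)
```

Adjusted p-values are never smaller than the raw ones, so borderline uncorrected effects may drop out.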
No significant correlations here.
correlations[["encoding_to_probe_avg"]][["r"]] %>%
mutate(
condition = row.names(.),
omnibus_span_no_DFR_MRI = cell_spec(omnibus_span_no_DFR_MRI, "html",
color =ifelse(correlations[["encoding_to_probe_avg"]][["p"]]$omnibus_span_no_DFR_MRI < 0.05, "black", "grey")),
XDFR_MRI_ACC_L3 = cell_spec(XDFR_MRI_ACC_L3, "html",
color =ifelse(correlations[["encoding_to_probe_avg"]][["p"]]$XDFR_MRI_ACC_L3 < 0.05, "black", "grey")),
BPRS_TOT.x = cell_spec(BPRS_TOT.x, "html",
color =ifelse(correlations[["encoding_to_probe_avg"]][["p"]]$BPRS_TOT.x < 0.05, "black", "grey"))
) %>%
select(condition,omnibus_span_no_DFR_MRI,XDFR_MRI_ACC_L3,BPRS_TOT.x) %>%
kable(format = "html", escape = F) %>%
kable_styling("striped", full_width = F) %>%
add_header_above((c(" ", "Individual Encoding to Individual Probe" = 3)))
| condition | omnibus_span_no_DFR_MRI | XDFR_MRI_ACC_L3 | BPRS_TOT.x |
|---|---|---|---|
| low load incorrect | -0.11308500810141 | -0.0541219414163688 | 0.103429693788007 |
| high load incorrect | -0.11396284256804 | 0.0182947763540957 | -0.0340578810731565 |
| low load correct | -0.132274001447299 | -0.0397258232481695 | 0.103288755943387 |
| high load correct | 0.0152588450734783 | 0.106918484955607 | 0.127349181899464 |
This measure can distinguish load but not accuracy.
encoding_to_probe_avg <- fisherz(similarity_temp[["encoding_to_probe_avg"]])
t.test(encoding_to_probe_avg[,4],encoding_to_probe_avg[,2],paired=TRUE)
##
## Paired t-test
##
## data: encoding_to_probe_avg[, 4] and encoding_to_probe_avg[, 2]
## t = 0.28594, df = 169, p-value = 0.7753
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.01497926 0.02005367
## sample estimates:
## mean of the differences
## 0.002537202
t.test(encoding_to_probe_avg[,4],encoding_to_probe_avg[,3],paired=TRUE)
##
## Paired t-test
##
## data: encoding_to_probe_avg[, 4] and encoding_to_probe_avg[, 3]
## t = 5.3051, df = 169, p-value = 3.493e-07
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## 0.02220810 0.04853148
## sample estimates:
## mean of the differences
## 0.03536979
(encoding_to_probe_plots[["encoding_to_probe_avg"]][["omnibus"]][["high_load_correct"]] + encoding_to_probe_plots[["encoding_to_probe_avg"]][["omnibus"]][["high_load_incorrect"]]) /
(encoding_to_probe_plots[["encoding_to_probe_avg"]][["omnibus"]][["low_load_correct"]] +
encoding_to_probe_plots[["encoding_to_probe_avg"]][["omnibus"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs omnibus")
## Warning: Removed 57 rows containing non-finite values (stat_smooth).
## Warning: Removed 57 rows containing missing values (geom_point).
(encoding_to_probe_plots[["encoding_to_probe_avg"]][["L3_Acc"]][["high_load_correct"]] + encoding_to_probe_plots[["encoding_to_probe_avg"]][["L3_Acc"]][["high_load_incorrect"]]) /
(encoding_to_probe_plots[["encoding_to_probe_avg"]][["L3_Acc"]][["low_load_correct"]] +
encoding_to_probe_plots[["encoding_to_probe_avg"]][["L3_Acc"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs High Load Accuracy")
## Warning: Removed 57 rows containing non-finite values (stat_smooth).
## Warning: Removed 57 rows containing missing values (geom_point).
(encoding_to_probe_plots[["encoding_to_probe_avg"]][["BPRS"]][["high_load_correct"]] + encoding_to_probe_plots[["encoding_to_probe_avg"]][["BPRS"]][["high_load_incorrect"]]) /
(encoding_to_probe_plots[["encoding_to_probe_avg"]][["BPRS"]][["low_load_correct"]] +
encoding_to_probe_plots[["encoding_to_probe_avg"]][["BPRS"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs BPRS Total")
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 58 rows containing non-finite values (stat_smooth).
## Warning: Removed 58 rows containing missing values (geom_point).
There is a significant correlation between similarity on low load correct trials and span.
correlations[["encoding_to_correct_probe_avg"]][["r"]] %>%
mutate(
condition = row.names(.),
omnibus_span_no_DFR_MRI = cell_spec(omnibus_span_no_DFR_MRI, "html",
color =ifelse(correlations[["encoding_to_correct_probe_avg"]][["p"]]$omnibus_span_no_DFR_MRI < 0.05, "black", "grey")),
XDFR_MRI_ACC_L3 = cell_spec(XDFR_MRI_ACC_L3, "html",
color =ifelse(correlations[["encoding_to_correct_probe_avg"]][["p"]]$XDFR_MRI_ACC_L3 < 0.05, "black", "grey")),
BPRS_TOT.x = cell_spec(BPRS_TOT.x, "html",
color =ifelse(correlations[["encoding_to_correct_probe_avg"]][["p"]]$BPRS_TOT.x < 0.05, "black", "grey"))
) %>%
select(condition,omnibus_span_no_DFR_MRI,XDFR_MRI_ACC_L3,BPRS_TOT.x) %>%
kable(format = "html", escape = F) %>%
kable_styling("striped", full_width = F) %>%
add_header_above((c(" ", "Individual Encoding to Template probe" = 3)))
| condition | omnibus_span_no_DFR_MRI | XDFR_MRI_ACC_L3 | BPRS_TOT.x |
|---|---|---|---|
| low load incorrect | -0.150066752033504 | 0.0531966246889002 | -0.0201338877626089 |
| high load incorrect | -0.0839435554221092 | 0.112049282067899 | 0.0958919786614344 |
| low load correct | -0.183810479715138 | -0.148667525923073 | 0.0188064366510966 |
| high load correct | 0.0164872329730542 | 0.053809718240409 | 0.101567168245189 |
This measure distinguishes load but not accuracy: high vs. low load correct trials differ significantly, whereas correct vs. incorrect high load trials do not.
encoding_to_correct_probe_avg <- fisherz(similarity_temp[["encoding_to_correct_probe_avg"]])
t.test(encoding_to_correct_probe_avg[,4],encoding_to_correct_probe_avg[,2],paired=TRUE)
##
## Paired t-test
##
## data: encoding_to_correct_probe_avg[, 4] and encoding_to_correct_probe_avg[, 2]
## t = 0.85892, df = 169, p-value = 0.3916
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.008718501 0.022148526
## sample estimates:
## mean of the differences
## 0.006715013
t.test(encoding_to_correct_probe_avg[,4],encoding_to_correct_probe_avg[,3],paired=TRUE)
##
## Paired t-test
##
## data: encoding_to_correct_probe_avg[, 4] and encoding_to_correct_probe_avg[, 3]
## t = 7.5967, df = 169, p-value = 1.98e-12
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## 0.04235849 0.07210289
## sample estimates:
## mean of the differences
## 0.05723069
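Throughout, `psych::fisherz` applies the Fisher z-transform, z = atanh(r), so that the averaged correlations are approximately normally distributed before the paired t-tests; the transform is monotone, so it preserves the ordering of the condition means. A quick check of the identity on toy values (not study data):

```r
library(psych)

r <- c(-0.9, 0.1, 0.5)
z <- fisherz(r)                                        # psych's Fisher z
stopifnot(all.equal(z, atanh(r)))                      # identical to atanh
stopifnot(all.equal(z, 0.5 * log((1 + r) / (1 - r))))  # closed form
stopifnot(all.equal(fisherz2r(z), r))                  # inverse recovers r
```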
(encoding_to_probe_plots[["encoding_to_correct_probe_avg"]][["omnibus"]][["high_load_correct"]] + encoding_to_probe_plots[["encoding_to_correct_probe_avg"]][["omnibus"]][["high_load_incorrect"]]) /
(encoding_to_probe_plots[["encoding_to_correct_probe_avg"]][["omnibus"]][["low_load_correct"]] +
encoding_to_probe_plots[["encoding_to_correct_probe_avg"]][["omnibus"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs omnibus")
## Warning: Removed 57 rows containing non-finite values (stat_smooth).
## Warning: Removed 57 rows containing missing values (geom_point).
(encoding_to_probe_plots[["encoding_to_correct_probe_avg"]][["L3_Acc"]][["high_load_correct"]] + encoding_to_probe_plots[["encoding_to_correct_probe_avg"]][["L3_Acc"]][["high_load_incorrect"]]) /
(encoding_to_probe_plots[["encoding_to_correct_probe_avg"]][["L3_Acc"]][["low_load_correct"]] +
encoding_to_probe_plots[["encoding_to_correct_probe_avg"]][["L3_Acc"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs High Load Accuracy")
## Warning: Removed 57 rows containing non-finite values (stat_smooth).
## Warning: Removed 57 rows containing missing values (geom_point).
(encoding_to_probe_plots[["encoding_to_correct_probe_avg"]][["BPRS"]][["high_load_correct"]] + encoding_to_probe_plots[["encoding_to_correct_probe_avg"]][["BPRS"]][["high_load_incorrect"]]) /
(encoding_to_probe_plots[["encoding_to_correct_probe_avg"]][["BPRS"]][["low_load_correct"]] +
encoding_to_probe_plots[["encoding_to_correct_probe_avg"]][["BPRS"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs BPRS Total")
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 58 rows containing non-finite values (stat_smooth).
## Warning: Removed 58 rows containing missing values (geom_point).
No significant relationships.
correlations[["correct_encoding_to_probe_avg"]][["r"]] %>%
mutate(
condition = row.names(.),
omnibus_span_no_DFR_MRI = cell_spec(omnibus_span_no_DFR_MRI, "html",
color =ifelse(correlations[["correct_encoding_to_probe_avg"]][["p"]]$omnibus_span_no_DFR_MRI < 0.05, "black", "grey")),
XDFR_MRI_ACC_L3 = cell_spec(XDFR_MRI_ACC_L3, "html",
color =ifelse(correlations[["correct_encoding_to_probe_avg"]][["p"]]$XDFR_MRI_ACC_L3 < 0.05, "black", "grey")),
BPRS_TOT.x = cell_spec(BPRS_TOT.x, "html",
color =ifelse(correlations[["correct_encoding_to_probe_avg"]][["p"]]$BPRS_TOT.x < 0.05, "black", "grey"))
) %>%
select(condition,omnibus_span_no_DFR_MRI,XDFR_MRI_ACC_L3,BPRS_TOT.x) %>%
kable(format = "html", escape = F) %>%
kable_styling("striped", full_width = F) %>%
add_header_above((c(" ", "Template Encoding to probe" = 3)))
| condition | omnibus_span_no_DFR_MRI | XDFR_MRI_ACC_L3 | BPRS_TOT.x |
|---|---|---|---|
| low load incorrect | -0.130476566628407 | -0.170014618747209 | 0.159230940572967 |
| high load incorrect | -0.0620210617698407 | 0.0855301466624124 | 0.0493015835643384 |
| low load correct | -0.0937808861586807 | -0.0468728356263741 | 0.0455312927087539 |
| high load correct | -0.126196571387953 | 0.0143330465570246 | 0.0225574514433589 |
This measure is not predictive of any of the behavioral or clinical measures.
correct_encoding_to_probe_avg <- fisherz(similarity_temp[["correct_encoding_to_probe_avg"]])
t.test(correct_encoding_to_probe_avg[,4],correct_encoding_to_probe_avg[,2],paired=TRUE)
##
## Paired t-test
##
## data: correct_encoding_to_probe_avg[, 4] and correct_encoding_to_probe_avg[, 2]
## t = -1.847, df = 169, p-value = 0.06649
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.032294757 0.001073984
## sample estimates:
## mean of the differences
## -0.01561039
t.test(correct_encoding_to_probe_avg[,4],correct_encoding_to_probe_avg[,3],paired=TRUE)
##
## Paired t-test
##
## data: correct_encoding_to_probe_avg[, 4] and correct_encoding_to_probe_avg[, 3]
## t = -1.8917, df = 169, p-value = 0.06024
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.0261349659 0.0005568271
## sample estimates:
## mean of the differences
## -0.01278907
(encoding_to_probe_plots[["correct_encoding_to_probe_avg"]][["omnibus"]][["high_load_correct"]] + encoding_to_probe_plots[["correct_encoding_to_probe_avg"]][["omnibus"]][["high_load_incorrect"]]) /
(encoding_to_probe_plots[["correct_encoding_to_probe_avg"]][["omnibus"]][["low_load_correct"]] +
encoding_to_probe_plots[["correct_encoding_to_probe_avg"]][["omnibus"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs omnibus")
## Warning: Removed 57 rows containing non-finite values (stat_smooth).
## Warning: Removed 57 rows containing missing values (geom_point).
(encoding_to_probe_plots[["correct_encoding_to_probe_avg"]][["L3_Acc"]][["high_load_correct"]] + encoding_to_probe_plots[["correct_encoding_to_probe_avg"]][["L3_Acc"]][["high_load_incorrect"]]) /
(encoding_to_probe_plots[["correct_encoding_to_probe_avg"]][["L3_Acc"]][["low_load_correct"]] +
encoding_to_probe_plots[["correct_encoding_to_probe_avg"]][["L3_Acc"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs High Load Accuracy")
## Warning: Removed 57 rows containing non-finite values (stat_smooth).
## Warning: Removed 57 rows containing missing values (geom_point).
(encoding_to_probe_plots[["correct_encoding_to_probe_avg"]][["BPRS"]][["high_load_correct"]] + encoding_to_probe_plots[["correct_encoding_to_probe_avg"]][["BPRS"]][["high_load_incorrect"]]) /
(encoding_to_probe_plots[["correct_encoding_to_probe_avg"]][["BPRS"]][["low_load_correct"]] +
encoding_to_probe_plots[["correct_encoding_to_probe_avg"]][["BPRS"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs BPRS Total")
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 58 rows containing non-finite values (stat_smooth).
## Warning: Removed 58 rows containing missing values (geom_point).
Nothing significant.
correlations[["correct_encoding_to_correct_probe"]][c(2,3,5),] %>%
mutate(
condition = row.names(.),
r = cell_spec(r, "html",
color =ifelse(p < 0.05, "black", "grey"))
) %>%
select(condition,r) %>%
kable(format = "html", escape = F) %>%
kable_styling("striped", full_width = F) %>%
add_header_above(c(" ", "Template Encoding to Template Probe" = 1))
| condition | r |
|---|---|
| omnibus_span_no_DFR_MRI | -0.0408674150212371 |
| XDFR_MRI_ACC_L3 | -0.00289072384410606 |
| BPRS_TOT.x | 0.0402901341367946 |
encoding_to_probe_plots[["correct_encoding_to_correct_probe"]][["omnibus"]]
encoding_to_probe_plots[["correct_encoding_to_correct_probe"]][["L3_Acc"]]
encoding_to_probe_plots[["correct_encoding_to_correct_probe"]][["BPRS"]]
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
delay_to_probe_plots <- list()
# behavioral measures (x-axis) and trial conditions (y-axis) to cross
behav_vars <- c(omnibus = "omnibus_span_no_DFR_MRI",
                L3_Acc  = "XDFR_MRI_ACC_L3",
                BPRS    = "BPRS_TOT.x")
conditions <- c(low_load_incorrect  = "low load incorrect",
                high_load_incorrect = "high load incorrect",
                low_load_correct    = "low load correct",
                high_load_correct   = "high load correct")
titles <- c(low_load_incorrect  = "Low Load - Incorrect trials",
            high_load_incorrect = "High Load - Incorrect trials",
            low_load_correct    = "Low Load - Correct trials",
            high_load_correct   = "High Load - Correct trials")
for (i in c(3,8,9)){
  colnames(similarity_temp[[i]]) <- unlist(similarity_temp[[1]][[1]])
  similarity_temp[[i]][similarity_temp[[i]]==0] <- NA
  temp_plot_data <- cbind.data.frame(data_to_plot,similarity_temp[[i]])
  for (measure in names(behav_vars)){
    for (cond in names(conditions)){
      # the .data pronoun (ggplot2 >= 3.0) selects columns by stored name
      delay_to_probe_plots[[names(similarity_temp)[i]]][[measure]][[cond]] <-
        ggplot(data = temp_plot_data,
               aes(x = .data[[behav_vars[[measure]]]],
                   y = .data[[conditions[[cond]]]]))+
        geom_point()+
        stat_smooth(method="lm")+
        ylab("Similarity")+
        ggtitle(titles[[cond]])+
        theme_classic()
    }
  }
}
temp_plot_data <- cbind.data.frame(data_to_plot,similarity_temp[["correct_delay_to_correct_probe"]])
colnames(temp_plot_data)[9] <- "correct_delay_probe"
delay_to_probe_plots[["correct_delay_to_correct_probe"]][["omnibus"]] <- ggplot(data =
temp_plot_data,aes(x=omnibus_span_no_DFR_MRI,y=correct_delay_probe))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("Template delay/probe vs Omnibus Span")+
theme_classic()
delay_to_probe_plots[["correct_delay_to_correct_probe"]][["BPRS"]] <- ggplot(data =
temp_plot_data,aes(x=BPRS_TOT.x,y=correct_delay_probe))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("Template delay/probe vs BPRS")+
theme_classic()
delay_to_probe_plots[["correct_delay_to_correct_probe"]][["L3_Acc"]] <- ggplot(data =
temp_plot_data,aes(x=XDFR_MRI_ACC_L3,y=correct_delay_probe))+
geom_point()+
stat_smooth(method="lm")+
ylab("Similarity")+
ggtitle("Template delay/probe vs L3 accuracy")+
theme_classic()
In the tables that follow, a correlation printed in black is significant at p < 0.05; otherwise it is printed in grey.
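This significance coloring is implemented with `kableExtra::cell_spec`, which wraps each value in an HTML span carrying the chosen color. A minimal sketch with made-up r and p values:

```r
library(kableExtra)

r <- c(0.25, 0.03)
p <- c(0.01, 0.70)
# black if significant, grey otherwise
colored <- cell_spec(round(r, 2), "html", color = ifelse(p < 0.05, "black", "grey"))
colored  # character vector of HTML <span> strings
```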
correlations <- list()
for (i in c(3,8,9)){
colnames(similarity_temp[[i]]) <- unlist(similarity_temp[[1]][[1]])
temp_list <- list(r = data.frame(matrix(nrow=4,ncol=6)), p = data.frame(matrix(nrow=4,ncol=6)))
for (behav in seq.int(2,7)){
for (sim in seq.int(1,4)){
temp_corr <- cor.test(similarity_temp[[i]][,sim],data_to_plot[,behav])
temp_list[["r"]][sim,behav-1] <- temp_corr$estimate
temp_list[["p"]][sim,behav-1] <- temp_corr$p.value
}
}
colnames(temp_list[["r"]]) <- colnames(data_to_plot)[2:7]
rownames(temp_list[["r"]]) <- colnames(similarity_temp[[i]])
colnames(temp_list[["p"]]) <- colnames(data_to_plot)[2:7]
rownames(temp_list[["p"]]) <- colnames(similarity_temp[[i]])
correlations[[names(similarity_temp)[i]]] <- temp_list
}
temp <- data.frame(r=matrix(nrow=6,ncol=1),p=matrix(nrow=6,ncol=1))
rownames(temp) <- colnames(data_to_plot)[2:7]
for (behav in seq.int(2,7)){
temp_corr <- cor.test(similarity_temp[["correct_delay_to_correct_probe"]],data_to_plot[,behav])
temp$r[behav-1] <- temp_corr$estimate
temp$p[behav-1] <- temp_corr$p.value
}
correlations[["correct_delay_to_correct_probe"]] <- temp
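For reference, `cor.test` returns the Pearson correlation in `$estimate` and its two-sided p-value in `$p.value`, which is how the r and p tables above are filled. A toy example with simulated data (not the study data):

```r
set.seed(42)
x <- rnorm(40)
y <- x + rnorm(40)            # strongly related by construction
ct <- cor.test(x, y)
ct$estimate                   # Pearson r
stopifnot(ct$p.value < 0.05)  # significant, as expected for correlated data
```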
Here, we see a significant relationship between similarity on low load correct trials and high load accuracy.
correlations[["delay_to_probe_avg"]][["r"]] %>%
mutate(
condition = row.names(.),
omnibus_span_no_DFR_MRI = cell_spec(omnibus_span_no_DFR_MRI, "html",
color =ifelse(correlations[["delay_to_probe_avg"]][["p"]]$omnibus_span_no_DFR_MRI < 0.05, "black", "grey")),
XDFR_MRI_ACC_L3 = cell_spec(XDFR_MRI_ACC_L3, "html",
color =ifelse(correlations[["delay_to_probe_avg"]][["p"]]$XDFR_MRI_ACC_L3 < 0.05, "black", "grey")),
BPRS_TOT.x = cell_spec(BPRS_TOT.x, "html",
color =ifelse(correlations[["delay_to_probe_avg"]][["p"]]$BPRS_TOT.x < 0.05, "black", "grey"))
) %>%
select(condition,omnibus_span_no_DFR_MRI,XDFR_MRI_ACC_L3,BPRS_TOT.x) %>%
kable(format = "html", escape = F) %>%
kable_styling("striped", full_width = F) %>%
add_header_above((c(" ", "Individual delay to Individual Probe" = 3)))
| condition | omnibus_span_no_DFR_MRI | XDFR_MRI_ACC_L3 | BPRS_TOT.x |
|---|---|---|---|
| low load incorrect | 0.0404533077220652 | 0.0628571552248336 | -0.0386375340860521 |
| high load incorrect | -0.00167161364052409 | -0.128985499605605 | -0.127055140985302 |
| low load correct | -0.0370425226969129 | -0.15554895478408 | -0.0307619915617799 |
| high load correct | 0.0306317347246807 | -0.107039175871987 | -0.047238721353328 |
This measure distinguishes load but not accuracy: high vs. low load correct trials differ significantly, whereas correct vs. incorrect high load trials do not.
delay_to_probe_avg <- fisherz(similarity_temp[["delay_to_probe_avg"]])
t.test(delay_to_probe_avg[,4],delay_to_probe_avg[,2],paired=TRUE)
##
## Paired t-test
##
## data: delay_to_probe_avg[, 4] and delay_to_probe_avg[, 2]
## t = 1.388, df = 169, p-value = 0.167
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.005455567 0.031294815
## sample estimates:
## mean of the differences
## 0.01291962
t.test(delay_to_probe_avg[,4],delay_to_probe_avg[,3],paired=TRUE)
##
## Paired t-test
##
## data: delay_to_probe_avg[, 4] and delay_to_probe_avg[, 3]
## t = 3.0898, df = 169, p-value = 0.002343
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## 0.007607663 0.034529418
## sample estimates:
## mean of the differences
## 0.02106854
(delay_to_probe_plots[["delay_to_probe_avg"]][["omnibus"]][["high_load_correct"]] + delay_to_probe_plots[["delay_to_probe_avg"]][["omnibus"]][["high_load_incorrect"]]) /
(delay_to_probe_plots[["delay_to_probe_avg"]][["omnibus"]][["low_load_correct"]] +
delay_to_probe_plots[["delay_to_probe_avg"]][["omnibus"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs omnibus")
## Warning: Removed 57 rows containing non-finite values (stat_smooth).
## Warning: Removed 57 rows containing missing values (geom_point).
(delay_to_probe_plots[["delay_to_probe_avg"]][["L3_Acc"]][["high_load_correct"]] + delay_to_probe_plots[["delay_to_probe_avg"]][["L3_Acc"]][["high_load_incorrect"]]) /
(delay_to_probe_plots[["delay_to_probe_avg"]][["L3_Acc"]][["low_load_correct"]] +
delay_to_probe_plots[["delay_to_probe_avg"]][["L3_Acc"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs High Load Accuracy")
## Warning: Removed 57 rows containing non-finite values (stat_smooth).
## Warning: Removed 57 rows containing missing values (geom_point).
(delay_to_probe_plots[["delay_to_probe_avg"]][["BPRS"]][["high_load_correct"]] + delay_to_probe_plots[["delay_to_probe_avg"]][["BPRS"]][["high_load_incorrect"]]) /
(delay_to_probe_plots[["delay_to_probe_avg"]][["BPRS"]][["low_load_correct"]] +
delay_to_probe_plots[["delay_to_probe_avg"]][["BPRS"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs BPRS Total")
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 58 rows containing non-finite values (stat_smooth).
## Warning: Removed 58 rows containing missing values (geom_point).
No significant relationships here.
correlations[["delay_to_correct_probe_avg"]][["r"]] %>%
mutate(
condition = row.names(.),
omnibus_span_no_DFR_MRI = cell_spec(omnibus_span_no_DFR_MRI, "html",
color =ifelse(correlations[["delay_to_correct_probe_avg"]][["p"]]$omnibus_span_no_DFR_MRI < 0.05, "black", "grey")),
XDFR_MRI_ACC_L3 = cell_spec(XDFR_MRI_ACC_L3, "html",
color =ifelse(correlations[["delay_to_correct_probe_avg"]][["p"]]$XDFR_MRI_ACC_L3 < 0.05, "black", "grey")),
BPRS_TOT.x = cell_spec(BPRS_TOT.x, "html",
color =ifelse(correlations[["delay_to_correct_probe_avg"]][["p"]]$BPRS_TOT.x < 0.05, "black", "grey"))
) %>%
select(condition,omnibus_span_no_DFR_MRI,XDFR_MRI_ACC_L3,BPRS_TOT.x) %>%
kable(format = "html", escape = F) %>%
kable_styling("striped", full_width = F) %>%
add_header_above((c(" ", "Individual delay to Template probe" = 3)))
| condition | omnibus_span_no_DFR_MRI | XDFR_MRI_ACC_L3 | BPRS_TOT.x |
|---|---|---|---|
| low load incorrect | -0.00258392767984013 | -0.0647831749962899 | -0.108872380605308 |
| high load incorrect | -0.00128734837611086 | -0.0624134292574505 | -0.110652830854019 |
| low load correct | 0.112179876532588 | -0.0394618061801707 | -0.137319901063414 |
| high load correct | 0.0226143436002303 | -0.0672336450414832 | -0.0560933051628179 |
This measure distinguishes neither load nor accuracy.
delay_to_correct_probe_avg <- fisherz(similarity_temp[["delay_to_correct_probe_avg"]])
t.test(delay_to_correct_probe_avg[,4],delay_to_correct_probe_avg[,2],paired=TRUE)
##
## Paired t-test
##
## data: delay_to_correct_probe_avg[, 4] and delay_to_correct_probe_avg[, 2]
## t = -0.97968, df = 169, p-value = 0.3286
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.025265109 0.008505748
## sample estimates:
## mean of the differences
## -0.008379681
t.test(delay_to_correct_probe_avg[,4],delay_to_correct_probe_avg[,3],paired=TRUE)
##
## Paired t-test
##
## data: delay_to_correct_probe_avg[, 4] and delay_to_correct_probe_avg[, 3]
## t = 0.65507, df = 169, p-value = 0.5133
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.008764226 0.017469311
## sample estimates:
## mean of the differences
## 0.004352543
(delay_to_probe_plots[["delay_to_correct_probe_avg"]][["omnibus"]][["high_load_correct"]] + delay_to_probe_plots[["delay_to_correct_probe_avg"]][["omnibus"]][["high_load_incorrect"]]) /
(delay_to_probe_plots[["delay_to_correct_probe_avg"]][["omnibus"]][["low_load_correct"]] +
delay_to_probe_plots[["delay_to_correct_probe_avg"]][["omnibus"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs omnibus")
## Warning: Removed 57 rows containing non-finite values (stat_smooth).
## Warning: Removed 57 rows containing missing values (geom_point).
(delay_to_probe_plots[["delay_to_correct_probe_avg"]][["L3_Acc"]][["high_load_correct"]] + delay_to_probe_plots[["delay_to_correct_probe_avg"]][["L3_Acc"]][["high_load_incorrect"]]) /
(delay_to_probe_plots[["delay_to_correct_probe_avg"]][["L3_Acc"]][["low_load_correct"]] +
delay_to_probe_plots[["delay_to_correct_probe_avg"]][["L3_Acc"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs High Load Accuracy")
## Warning: Removed 57 rows containing non-finite values (stat_smooth).
## Warning: Removed 57 rows containing missing values (geom_point).
(delay_to_probe_plots[["delay_to_correct_probe_avg"]][["BPRS"]][["high_load_correct"]] + delay_to_probe_plots[["delay_to_correct_probe_avg"]][["BPRS"]][["high_load_incorrect"]]) /
(delay_to_probe_plots[["delay_to_correct_probe_avg"]][["BPRS"]][["low_load_correct"]] +
delay_to_probe_plots[["delay_to_correct_probe_avg"]][["BPRS"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs BPRS Total")
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 58 rows containing non-finite values (stat_smooth).
## Warning: Removed 58 rows containing missing values (geom_point).
There is a significant relationship between similarity on high load incorrect trials and span.
correlations[["correct_delay_to_probe_avg"]][["r"]] %>%
mutate(
condition = row.names(.),
omnibus_span_no_DFR_MRI = cell_spec(omnibus_span_no_DFR_MRI, "html",
color =ifelse(correlations[["correct_delay_to_probe_avg"]][["p"]]$omnibus_span_no_DFR_MRI < 0.05, "black", "grey")),
XDFR_MRI_ACC_L3 = cell_spec(XDFR_MRI_ACC_L3, "html",
color =ifelse(correlations[["correct_delay_to_probe_avg"]][["p"]]$XDFR_MRI_ACC_L3 < 0.05, "black", "grey")),
BPRS_TOT.x = cell_spec(BPRS_TOT.x, "html",
color =ifelse(correlations[["correct_delay_to_probe_avg"]][["p"]]$BPRS_TOT.x < 0.05, "black", "grey"))
) %>%
select(condition,omnibus_span_no_DFR_MRI,XDFR_MRI_ACC_L3,BPRS_TOT.x) %>%
kable(format = "html", escape = F) %>%
kable_styling("striped", full_width = F) %>%
add_header_above((c(" ", "Template delay to probe" = 3)))
| condition | omnibus_span_no_DFR_MRI | XDFR_MRI_ACC_L3 | BPRS_TOT.x |
|---|---|---|---|
| low load incorrect | -0.0126468441837965 | 0.0545980128216875 | -0.0531474614912401 |
| high load incorrect | 0.153961519152477 | 0.0741048231101815 | -0.00923587995567198 |
| low load correct | 0.0805550991240167 | -0.112692711347637 | -0.00277214369435476 |
| high load correct | 0.127000730668868 | 0.0684247831820997 | -0.034255502613284 |
This measure distinguishes load but not accuracy: high vs. low load correct trials differ significantly, whereas correct vs. incorrect high load trials do not.
correct_delay_to_probe_avg <- fisherz(similarity_temp[["correct_delay_to_probe_avg"]])
t.test(correct_delay_to_probe_avg[,4],correct_delay_to_probe_avg[,2],paired=TRUE)
##
## Paired t-test
##
## data: correct_delay_to_probe_avg[, 4] and correct_delay_to_probe_avg[, 2]
## t = -1.3229, df = 169, p-value = 0.1877
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.029060650 0.005739938
## sample estimates:
## mean of the differences
## -0.01166036
t.test(correct_delay_to_probe_avg[,4],correct_delay_to_probe_avg[,3],paired=TRUE)
##
## Paired t-test
##
## data: correct_delay_to_probe_avg[, 4] and correct_delay_to_probe_avg[, 3]
## t = 2.4113, df = 169, p-value = 0.01697
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## 0.003593674 0.036043569
## sample estimates:
## mean of the differences
## 0.01981862
(delay_to_probe_plots[["correct_delay_to_probe_avg"]][["omnibus"]][["high_load_correct"]] + delay_to_probe_plots[["correct_delay_to_probe_avg"]][["omnibus"]][["high_load_incorrect"]]) /
(delay_to_probe_plots[["correct_delay_to_probe_avg"]][["omnibus"]][["low_load_correct"]] +
delay_to_probe_plots[["correct_delay_to_probe_avg"]][["omnibus"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs omnibus")
## Warning: Removed 57 rows containing non-finite values (stat_smooth).
## Warning: Removed 57 rows containing missing values (geom_point).
(delay_to_probe_plots[["correct_delay_to_probe_avg"]][["L3_Acc"]][["high_load_correct"]] + delay_to_probe_plots[["correct_delay_to_probe_avg"]][["L3_Acc"]][["high_load_incorrect"]]) /
(delay_to_probe_plots[["correct_delay_to_probe_avg"]][["L3_Acc"]][["low_load_correct"]] +
delay_to_probe_plots[["correct_delay_to_probe_avg"]][["L3_Acc"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs High Load Accuracy")
## Warning: Removed 57 rows containing non-finite values (stat_smooth).
## Warning: Removed 57 rows containing missing values (geom_point).
(delay_to_probe_plots[["correct_delay_to_probe_avg"]][["BPRS"]][["high_load_correct"]] + delay_to_probe_plots[["correct_delay_to_probe_avg"]][["BPRS"]][["high_load_incorrect"]]) /
(delay_to_probe_plots[["correct_delay_to_probe_avg"]][["BPRS"]][["low_load_correct"]] +
delay_to_probe_plots[["correct_delay_to_probe_avg"]][["BPRS"]][["low_load_incorrect"]])+
plot_annotation(title = "Individual trials vs BPRS Total")
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).
## Warning: Removed 58 rows containing non-finite values (stat_smooth).
## Warning: Removed 58 rows containing missing values (geom_point).
There is a very small, but significant, correlation between accuracy and similarity.
correlations[["correct_delay_to_correct_probe"]][c(2,3,5),] %>%
mutate(
condition = row.names(.),
r = cell_spec(r, "html",
color =ifelse(p < 0.05, "black", "grey"))
) %>%
select(condition,r) %>%
kable(format = "html", escape = F) %>%
kable_styling("striped", full_width = F) %>%
add_header_above(c(" ", "Template delay to Template Probe" = 1))
| condition | r |
|---|---|
| omnibus_span_no_DFR_MRI | 0.084320931941858 |
| XDFR_MRI_ACC_L3 | -0.0730407389792616 |
| BPRS_TOT.x | -0.0540088794276245 |
delay_to_probe_plots[["correct_delay_to_correct_probe"]][["omnibus"]]
delay_to_probe_plots[["correct_delay_to_correct_probe"]][["L3_Acc"]]
delay_to_probe_plots[["correct_delay_to_correct_probe"]][["BPRS"]]
## Warning: Removed 1 rows containing non-finite values (stat_smooth).
## Warning: Removed 1 rows containing missing values (geom_point).